The Ontological Properties of Social Roles: Definitional Dependence, Powers and Roles Playing Roles

Guido Boella (guido@di.unito.it) Dipartimento di Informatica - Università di Torino - Italy
Leendert van der Torre (torre@cwi.nl) CWI - Amsterdam and TU Delft - The Netherlands

Abstract. In this paper we address the problem of defining social roles in MAS. Social roles provide the basic structure of social institutions and organizations. We start from the properties attributed to roles in both the MAS and the Object Oriented community, and we use them in an ontological analysis of the notion of social role. We thus identify the main properties of social roles as being definitionally dependent on the institution they belong to, attributing powers to the agents playing them, and allowing roles to play roles. The methodology we use to model roles is the agent metaphor: social roles, in the same way as social institutions like normative systems and organizations, are attributed mental attitudes to explain their behavior.

1. Introduction

The social structures developed in multiagent systems are often proposed in the design of open systems as a solution for controlling the autonomy of the different participants (Artikis et al., 2002). A key notion in the social structure of a MAS is that of social role. Social roles make it possible to specify the activities delegated by a social institution to individuals to achieve its purpose, while abstracting from the individuals who will eventually play them. The description of a social role is given, e.g., in terms of rights, permissions and obligations (i.e., normative descriptions) (Pacheco and Carmo, 2003), expectations, standardised patterns of behavior (Esteva et al., 2001), social commitments (Cavedon and Sonenberg, 1998; Fasli, 2001), and goals and planning rules (Dastani et al., 2003). Even though social roles have such a central position in MAS coordination, there are still some problems.
First, it is not clear what the desired properties are and how to realize them. Second, normative descriptions are mostly limited to rights, while the notion of power seems relevant as well. We want to extend the notion of social role in Agent Oriented systems (AO), and to make it more concrete we use ideas and concepts from the properties of roles discussed in the Object Oriented paradigm (OO). A side effect is that a unified model of roles in AO and OO has an impact not only on AO, but also on OO. Roles are central not only in MAS, but also in Object Oriented modelling and programming. Roles in OO are used to dynamically add behaviors to objects, to factorize features of objects like methods or access rights, to separate the interactional properties of objects from their core behavior, and to allow exogenous coordination (Baldoni et al., 2005). This paper addresses the following questions:

− What are the desirable properties of social roles in MAS?

− How to build a model satisfying these properties?

To extend the existing ideas in AO and to use the ideas in OO, we refer to social theory, which suggests:

− Roles are always involved in a relationship with another entity, which seems to come first: roles belong to organizations and institutions which define them; hence, they are social roles.

− Concerning normative positions, besides rights and permissions, social roles are associated with powers in the institution they belong to.

− If roles can play roles like any other agent, then social roles should be considered as a kind of agent.

Besides treating roles, as usual in both AO and OO, as first-class citizens of the theory, here social roles are treated as agents. However, social roles are not autonomous, and they should therefore be treated as agents of a special kind. We call this methodology the agent metaphor.
Though at first sight social roles are anything but agents, we treat social roles as agents because we attribute mental attitudes to agents, as done by (Boella and van der Torre, 2004a; Boella and van der Torre, 2004b); this has the additional benefit that we can reuse for social roles existing theories, models and tools developed for agents. Analogously, social institutions can be described in the agent metaphor as agents to which mental attitudes are attributed. We apply the methodology used by (Boella and van der Torre, 2006) to describe and reason about other social entities like groups, virtual communities, contracts, and normative multiagent systems. In the next section we analyse the properties which are commonly attributed to roles in AO and OO. Then, we present their basic properties in our model: the definitional dependence in Section 3, the powers of roles in Section 4 and roles playing roles in Section 5, where the agent metaphor is further discussed. Then, in Section 6 we present our formal model of roles in MAS. Conclusions close the paper.

1. Agents can play multiple roles, and in some approaches they can even be required to play a role.

2. Roles are views on agents.

3. Individuals are uncoupled from roles. E.g., attributes like wage are associated with the employee role rather than with individuals.

4. Roles enhance reusability: the same role can be used by different agents.

5. Roles define expected behavior and obligations. E.g., a program chair is expected and obliged to select the papers of a conference.

6. Roles define sets of rights and permissions. E.g., the access rights.

7. Roles embed behavior specific to a context, like a group, which forms a subenvironment of coherent roles.

8. Roles define common interactions and embed the information and capabilities needed for communication and coordination. E.g., the roles of auctioneer and bidder in an auction, each with their possible moves.

9.
Roles promote an organizational view of the system, where roles are coordinating rather than coordinated entities.

Figure 1. The properties of roles in AO.

2. Properties of roles

In their survey about roles in MAS, (Cabri et al., 2004) identify several properties attributed to roles, which are illustrated in Figure 1. There are two problems. First, it is not clear which model of role can support all these properties. The second problem is that rights are too limited a notion. It suffices for role-based access, but in general we also need powers to specify normative positions. The properties attributed to roles in the Object Oriented community are summarized by (Steimann, 2000). In Figure 2, we show how these properties are also relevant for agents by giving some examples. These properties only partially overlap with the previous list. In particular, properties 5–9 of Figure 1 assume that agents are autonomous, can violate obligations, interact with each other, and form social institutions like organizations and groups. The properties discussed in OO are more concrete and talk about roles as adjunct instances to objects (11), states of roles (7), sequences of acquisitions (6), identity (14), polymorphism (7) and other phenomena, and thus address the first problem discussed in the paragraph above. However, they do not help with the generalization of rights to other powers. Moreover, these more concrete descriptions also give rise to two new questions. First, the fact that roles depend on relationships with other entities implies that these other entities come first, and then the roles. Second, roles playing roles implies a kind of role hierarchy. Groups and contexts are not sufficient to model all aspects of this. We need to model roles as a non-autonomous kind of agent.

1. A role comes with its own properties and behavior. Hence, it is a type. E.g., a director of a department commands other members and makes buy-orders.

2.
Roles depend on relationships: e.g., a student is related to a school, etc.

3. An object may play different roles. E.g., a person can be both a student and an employee.

4. An object may play the same role several times. E.g., a person can hold several employments.

5. An object may acquire and abandon roles dynamically. E.g., a person can acquire the role of student or of employee.

6. The sequence in which roles may be acquired and relinquished can be subject to restrictions. E.g., a person becomes a teaching assistant only if it is a student.

7. Objects of unrelated types can play the same role. E.g., both a person and an organization are customers.

8. Roles can play roles. E.g., an employee can be a project leader: a role of the employee.

9. A role can be transferred from one object to another. E.g., the salary of an open position may be specified independently of the person that will be employed.

10. The state of an object can vary depending on the role in which it is being addressed: this should be viewed as a separate instance of the object. E.g., an employee has an address per job and also a private one.

11. If an object plays several roles simultaneously, it responds according to the role in which it is being addressed. E.g., a person gives the address of the employee role it is playing.

12. Roles restrict access. This corresponds to an object having different perspectives, facets, or aspects. E.g., the private phone number of an employee can be invisible when the person is playing the employee role.

13. Different roles may share structure and behavior. This usually means that role definitions inherit from each other. E.g., the role student can have associated the behavior of giving exams, and more specific roles (like first year student) inherit this behavior.

14. An object and its roles share identity. Since roles do not exist by themselves, they cannot have an identity.

15. An object and its roles have different identities.
This view solves the so-called counting problem. E.g., the number of passengers of an airline can be greater than the number of persons who travelled with it.

Figure 2. The properties of roles in OO.

Thus, there are three open problems: how to define the dependence of social roles on relationships, how to extend normative positions from rights to powers, and how to model social roles as agents that play roles. These issues are discussed in the following three sections. We support these properties by means of an ontological analysis of the notion of social role. Roles deserve an ontological analysis in that they are among the basic notions of an ontology, besides the notions of natural type, substance, property, and relation. Ontological analysis aims at identifying the metaproperties distinguishing roles from those other notions, as done by (Masolo et al., 2004).

3. **Definitional dependence**

Social theory considers social roles as a way to structure organizations so as to distribute responsibilities. Thus, for social theory, roles exist only as a function of the organization they belong to. This feature has also been recognized in ontological analyses of roles. (Guarino and Welty, 2002) notice two characteristic properties of roles distinguishing them from natural types: roles are non-rigid entities and do not exist independently from other entities. Non-rigidity means that an entity can stop playing a role without losing its identity. E.g., a person can stop being a student, but not a person. The dependence of a role, as suggested by the work of (Sowa, 2000) and (Guarino and Welty, 2002), is a consequence of the fact that a role is meaningful only in the context of a relationship with another entity. This property is also called foundation: a role must always be associated with another entity through some relationship. Some hints of this ontological property of roles could already be found in the literature.
In the traditional approach to roles in linguistics, words are always related to other words: every word in a sentence has slots to be filled by others; e.g., a verb like eating has an agent and a patient role. Concerning conceptual modelling, in UML a role is related by an association to other roles. In Agent-UML a role is related to a group (Bauer et al., 2001). The dependence of a role on another entity is not contingent; rather, it lies in the very definition of the role. For this reason, (Fine, 1995) introduces the following notion of dependence: “to say that an object \( x \) depends upon an \( F \) is to say that an \( F \) will be ineliminably involved in any definition of \( x \)”. This notion is elaborated by (Masolo et al., 2004) into the notion of *definitional dependence*: e.g., the definition of the concept of student makes reference not to a specific school but to the concept of school, the employee to the concept of organization, the director to the concept of department, the president to the concept of state, etc. We believe, however, that this definitional dependence should be interpreted in an even stronger way. First of all, not only do social roles all depend on other entities, but the entities they depend on all belong to a common category; they are all social entities: groups, organizations, departments, states, etc. In a word, *social institutions*. Secondly, not only do social roles not exist without social entities, but, in turn, roles are essential to them: there is no state without a president, no school without a student. Hence, we adopt a stronger notion of definitional dependence. We say that the definition of the social institution \( F \) which the social role \( x \) belongs to contains the definition of the role \( x \). E.g., the social role of president of a state is defined in the constitution of that state.
The role president does not exist without the state and its definition, but the state itself is also not the same without the role of president: its definition would be different.

4. Roles, powers and institutions

According to Property 6 of Figure 1, rights and permissions are a fundamental feature of the normative positions of roles. Rights are used to regulate access to resources by agents playing roles, e.g., in role based access control (RBAC). However, as, amongst others, (Makinson, 1986) has noticed, the terms right and permission should often be understood in the sense of institutional power. The notion of power is certainly relevant here, since, e.g., a director of a department has not only the right to give commands to the employees, but, above all, the power to do so. But, as witnessed also by (Dastani et al., 2004)’s survey, the MAS model of role is mostly limited to rights. Moreover, in Figure 1, roles are associated with new capabilities. In Figure 2, roles are associated with behaviors (1). Roles as a way of grouping context dependent behavior do not explain why we need roles to do this grouping and not simply the notion of class, albeit a dynamic one. We claim that the reason is that these capabilities have a peculiar character: they are powers. Again, some insights can be gained by considering which capabilities are added to a social role. They can be grouped in three categories:

- Actions of the role that are recognized as actions of the institution: e.g., a director’s signature on a buy-order is considered as a commitment of its department to pay for the requested item.

- Actions of the agent playing the role that can modify the state of the role itself. E.g., a director can commit itself to new responsibilities.

- Interaction capabilities with other roles in the same institution. An agent in a role can send a message to another role, e.g., a director can give a command to an employee.
Not only do social roles not exist without social entities, but they cannot do anything without their consent. The reason is that social entities are not material entities: agents playing roles cannot affect them by means of material actions alone. Social institutions are socially constructed entities which exist thanks to the collective acceptance by agents of the regulative and constitutive rules regulating them. In particular, they are created by means of the notion of constitutive rule introduced by (Searle, 1995). Searle argues that there is a distinction between two types of rules: “Some rules regulate antecedently existing forms of behaviour. [...] Some rules, on the other hand, [...] create the possibility of or define that activity. The activity of playing chess is constituted by action in accordance with these rules. The institutions of marriage, promising [...] are systems of such constitutive rules or conventions.” Constitutive rules have the form “such and such an X counts as Y in context C”, where X is any object satisfying certain conditions and Y is a label that qualifies X as being something of an entirely new sort: an institutional fact. Examples of constitutive rules are “X counts as a presiding official in a wedding ceremony”, “this bit of paper counts as a five euro bill” and “this piece of land counts as somebody’s private property”. Thus, institutions are composed of regulative and constitutive rules. But since social roles are defined by the institution, they are in turn defined in terms of the constitutive rules and regulative rules attributed to them by the institution. Since constitutive rules are at the basis of an institution and of its roles, an agent can act in the institution only if for the institution the agent’s actions “count as” some institutional fact.
In this sense, the new capabilities added by the role are given by the institution; the role is empowered by the institution: the actions which it performs in its role “count as” (Searle, 1995) actions of the institution itself. We can explain the three different kinds of powers discussed above as different kinds of constitutive rules. First of all, actions of the player of the role “count as” institutional facts according to some constitutive rule of the institution. So it can affect the institution. Secondly, if the constitutive rules creating an institutional fact belong to the role the agent is playing, the agent can affect its role. Thirdly, if the constitutive rule belongs to some other role of the institution, the agent in playing its role can affect this other role. The effects of the action of a player of the role are not limited to making institutional facts true. Institutional facts can have, in turn, an effect on the institution and on the roles, via other constitutive rules introducing new constitutive and regulative rules. For example, the signature of the director “counts as” the commitment of the department (i.e., a new obligation) to pay for the delivered goods. And the command of the director “counts as” an obligation for the members of the department. Finally, note that if we consider the possibility that a role is changed by the exercise of a power from another role, we admit implicitly that a role is not only a type specifying the behavior expected of the player. Rather, a role is an instance with its own state. This state specifies the expected behavior of an agent, and this specification can vary over time according to the exercise of power by the player, by the institution and by other roles.
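The three kinds of powers can be pictured as constitutive rules whose institutional effect targets either the institution itself, the player's own role, or another role. The following Python sketch is purely illustrative: the `CountsAs` structure and all rule, action and role names are our own assumptions, not part of the formal model presented later.

```python
# Hypothetical sketch of the three categories of powers as constitutive
# rules: an action of a role's player "counts as" an institutional fact,
# and the fact affects the institution, the player's own role, or
# another role of the same institution. All names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class CountsAs:
    action: str   # action of the agent playing the role
    fact: str     # resulting institutional fact
    target: str   # "institution", "own_role", or another role's name

rules = [
    # 1. action recognized as an action of the institution
    CountsAs("director_signs_order", "department_committed_to_pay", "institution"),
    # 2. action modifying the state of the player's own role
    CountsAs("director_accepts_task", "new_responsibility_of_director", "own_role"),
    # 3. interaction capability affecting another role
    CountsAs("director_issues_command", "obligation_of_employee", "employee"),
]

def effects(action: str):
    """Institutional facts brought about by an action, with their targets."""
    return [(r.fact, r.target) for r in rules if r.action == action]
```

For instance, `effects("director_issues_command")` yields the institutional fact targeting the employee role, mirroring the third category above.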
In the Object Oriented world, the counterpart of strong definitional dependence, and of the roles' ability to access the institution's state, is that roles should be defined within the definition of an object, i.e., a class, which determines their scope: all roles should be encapsulated in other classes.

5. Roles playing roles

Another important feature of roles is that roles can play roles as well. For example, an employee can be a project leader, a member of the board can be the CEO of an enterprise, etc. Roles are usually played by agents: a person can be an employee, a member of a club, of a board, or an MP. But how can a role play a role? This is possible only if an agent and a role share some properties. As we will see in the next section, this is possible in our model since roles are described as agents, i.e., they are attributed mental attitudes as well. Note that in many models, e.g., (Dahchour et al., 2002), roles do not play roles, and a role like project leader is modelled simply as a specialization of the employee role. However, this solution relies on a specialization hierarchy of role types and requires introducing dynamic reclassification. Instead, our approach does not require this feature, but it still allows the creation of a hierarchy among roles: the hierarchy is based on the inherently dynamic played-by relation between roles and agents, rather than on a specialization relation. The methodology of our work is inspired by the agent metaphor of (Boella and van der Torre, 2006). They model entities of social reality like groups, normative systems, organizations and roles as agents. Their ontological claim is that social reality is difficult for humans to understand, even though humans themselves create it. Hence, to understand social reality humans resort to metaphorically mapping the social domain onto a better known domain: the domain of agents. Social entities exist because they are collectively accepted by agents (Searle, 1995).
To define the behavior of social entities, agents collectively attribute mental attitudes to them. This metaphorical mapping makes it possible to explain the features of social entities in terms of the features of agents. In particular, in this mapping a social institution can be considered as an agent where the regulative norms, like obligations and permissions, are mapped into the goals of an agent; the constitutive norms creating powers are mapped into the beliefs of the agent. Moreover, the institution, as a normative system, is supposed to have an autonomous behavior as an agent has: it aims at restoring the regularities prescribed by norms by monitoring violations and sanctioning them. The metaphor, however, stops here, since social entities cannot act in the world. Monitoring and sanctioning are carried out by real agents working in the institution. Roles in sociology are often described as expected behavior. To describe behavior, agent theory uses beliefs, desires and goals. Hence, roles can be considered as agent descriptions. This is different from the fact that roles are also played by agents, their actors. Since roles are considered as agents, they can play roles in turn. In the metaphorical mapping of (Boella and van der Torre, 2004a; Boella and van der Torre, 2004b), the role's expertise is represented by the beliefs of the agent and its responsibilities by the goals of the agent. To play a role, an agent has to adopt the goals representing its responsibilities and to carry them out according to the beliefs representing its expertise: the player has to act as if it had the beliefs and goals of the role. In the same way as social entities are constructed by the collective attribution of mental attitudes by agents, roles exist only because they are attributed mental attitudes by the institution they belong to.
The institution is thus defined by its beliefs and goals representing constitutive and regulative rules, and by the beliefs and goals it attributes to its roles. While (Boella and van der Torre, 2004a; Boella and van der Torre, 2004b) focus on the responsibilities of roles, in this paper we focus on their powers.

6. Formalization of roles

In this section we introduce our model of roles and institutions. First of all, a set of propositional variables $X$ describes the different aspects of the world, and rules $Rul(X)$ are used to represent mental attitudes. Secondly, we consider different sorts of agents $A$. Besides real agents $RA$ (either human or artificial), we consider as agents in the model also social institutions $SA$, like groups, normative systems and organizations, and roles $RO$ composing the structure of agents in $SA$. By mental attitudes we mean beliefs $B$, desires $D$ and goals $G$. Mental attitudes are described by rules. Moreover, the different mental attitudes are attributed to the agents by the agent description relation $AD$. It associates with each agent a set of beliefs, desires and goals. Moreover, $AD$ also associates agents with agents, because groups, normative systems, organizations, and roles as agents exist only as profiles attributed to them by real agents. So social institutions and roles exist only insofar as they are described as agents by real agents, according to the agent description relation.

DEFINITION 1 (MAS). Let $X$ be a set of variables. The set of literals built from $X$, written as $Lit(X)$, is $X \cup \{ \neg x \mid x \in X \}$, and the set of rules built from $X$, written as $Rul(X) = 2^{Lit(X)} \times Lit(X)$, is the set of pairs of a set of literals built from $X$ and a literal built from $X$, written as $\{l_1, ..., l_n\} \rightarrow l$. We also write $l_1 \land ... \land l_n \rightarrow l$ and, when $n = 0$, we write $T \rightarrow l$.
A multiagent system is a tuple $\langle RA, SA, RO, X, B, D, G, AD, MD, \geq, I, PL \rangle$ where:

- The real agents $RA$, social institutions $SA$ and roles $RO$, propositional variables $X$, beliefs $B$, desires $D$, and goals $G$ are all finite disjoint sets. We write $RA \cup SA \cup RO = A$ for the set of all agents and $M = D \cup G$ for their motivations.

- An agent description $AD : A \rightarrow 2^{A \cup X \cup B \cup D \cup G}$ is a complete function that maps each agent to other agents that exist in its profile, sets of variables (its decision variables), and its beliefs, desires and goals. For each agent $a \in A$, we write $A_a$ for $A \cap AD(a)$, and $B_a$ for $B \cap AD(a)$, et cetera. We write parameters $P = X \setminus \bigcup_{a \in A} X_a$.

- The mental description $MD : B \cup D \cup G \rightarrow Rul(X)$ is a complete function from the sets of beliefs, desires and goals to the set of rules built from $X$. We write $m : x \rightarrow y$ for $m$ such that $MD(m) = x \rightarrow y$.

- A priority relation is used to resolve conflicts among motivational attitudes: $\geq : A \rightarrow 2^M \times 2^M$ is a function from agents to a transitive and reflexive relation on the powerset of the motivations containing at least the subset relation. We write $\geq_a$ for $\geq(a)$.

- The institutional facts $I \subseteq P$ are parameters.

- The role playing function $PL : RO \rightarrow RA$ associates each role with its player.

The set of variables whose truth value is determined by an agent (decision variables representing actions) is distinguished from those which are not directly determined by the agent ($P$, the parameters). Only real agents act in the world, while social institutions act only through the agents playing roles in them. For this reason, social institutions and roles are not associated with decision variables ($\bigcup_{a \in SA \cup RO} X_a = \emptyset$).
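As an aside, the sets and functions of Definition 1 admit a direct, if naive, encoding. The sketch below is a hypothetical Python rendering (agent and attitude names anticipate those of Example 1): a rule is a pair of a frozenset of body literals and a head literal, and the constraint that institutions and roles have no decision variables is checked explicitly.

```python
# Illustrative encoding of Definition 1 (not the authors' code).
# Literals are strings, with negation written as a leading "-";
# a rule is (frozenset of body literals, head literal).

def rule(body, head):
    return (frozenset(body), head)

# Agents: real agents RA, social institutions SA, roles RO.
RA, SA, RO = {"A"}, {"O"}, {"B"}
A = RA | SA | RO

# Agent description AD: each agent -> attributed profiles, decision
# variables, and names of beliefs/desires/goals (MD gives each a rule).
AD = {
    "A": {"O", "x1", "x2", "d1"},  # A attributes profile O to the institution
    "O": {"B", "b1", "g2"},        # O in turn defines role B
    "B": {"g3"},
}
MD = {
    "d1": rule([], "-p"),          # A desires not-p
    "b1": rule(["x2"], "p"),       # O believes: doing x2 brings about p
    "g2": rule([], "p"),
    "g3": rule([], "q"),
}
PL = {"B": "A"}                    # role B is played by real agent A

# Only real agents act: institutions and roles have no decision variables.
X = {"x1", "x2"}
assert all(not (AD[a] & X) for a in SA | RO)
```

The assertion at the end enforces the condition $\bigcup_{a \in SA \cup RO} X_a = \emptyset$ stated above.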
Besides, “institutional facts” $I$ are states of affairs which exist only inside normative systems and organizations. As discussed in Section 4, (Searle, 1995) suggests that money, properties, and marriages exist only as part of social reality; since we model social reality by means of the attribution of mental attitudes to social entities, institutional facts exist just in the beliefs of these agents.

EXAMPLE 1. $MAS = \langle RA, SA, RO, X, B, D, G, AD, MD, \geq, I, PL \rangle$ with $RA = \{A\}$, $SA = \{O\}$, $RO = \{B\}$, $P = \{p, q, r, s\}$, and $X, P, B, D, G, AD, MD$ and $\geq$ are implicitly given by the following table:

|        | A                        | O                        | B                        |
|--------|--------------------------|--------------------------|--------------------------|
| B      | $d_1$                    | $T \rightarrow p$        |                          |
| D      | $g_1$                    | $T \rightarrow x_1$      | $g_2$                    |
| X      | $x_1, x_2$               | ——-                      | ——-                      |
| $\geq$ | $d_1 > g_1 > g_2 > g_3$  | $g_2 > g_3 > d_1 > g_1$  | $g_3 > g_2 > d_1 = g_1$  |
| PL     | B                        | ——-                      |                          |

This table should be read as follows. There are three agents: one real agent $A$, a social institution $O$ and a role $B$ of the institution, played by $A$. The rows specify which profiles are attributed by each agent: agent $A$ attributes profile $O$ to the institution, and the institution in turn defines role $B$ by attributing to it the mental attitudes specified in the last column. The long dashes in a cell represent that the field cannot have a value. Agent $A$ has, among others, a desire $d_1$ ($MD(d_1) = T \rightarrow \neg p$), and the institution has a goal $g_2$ which can be realized by an action $x_2$ of agent $A$, since $MD(b_1) = x_2 \rightarrow p$.
Finally, only a fragment of the priority relation is given, because it is given only for singleton motivations, whereas it is defined over sets of motivations. It says that each agent gives highest priority to its own motivations. The table can be extended to deal with more detailed motivations in the obvious way. Social institutions like normative systems and organizations are able to change themselves. E.g., they specify how their norms can be modified. Since social institutions depend on the attribution of mental attitudes which define both the regulative and constitutive norms, we represent their modification by means of the modification of their mental attitudes expressed as rules. We adopt here a relatively simple solution for adding, revising and removing rules from a rule base; it is based on the assumption that all relevant beliefs, desires and goals are already present in the system, such that we only have to adapt the agent description $AD$. An advantage of this construction is that the priorities of the desires and goals are also already defined in the multiagent system, and we do not have to introduce an update mechanism. Additions (a.k.a. expansions) to the agent description are defined as $+ : A \times (B \cup D \cup G) \rightarrow I$, i.e., for each agent, a mapping from mental attitudes to institutional facts. Since institutional facts $I$, like the additions, exist only in the beliefs of a normative system or an organization, we need a way to express how these beliefs can be made true. The relations among propositional variables are expressed as belief rules. Rules concerning beliefs about institutional facts are called constitutive rules and represent the “counts-as” relations introduced by (Searle, 1995).
**DEFINITION 2 (Counts-as).** Given $MAS = \langle RA, SA, RO, X, B, D, G, AD, MD, \geq, I, PL \rangle$, counts-as conditionals $CA \subseteq B_o$ of constitutive norms are beliefs of a social institution $o \in SA$, such that the constitutive rules $CR = MD(CA)$ are the set of rules whose heads are literals built out of institutional facts, $\text{Lit}(I)$. We write counts-as$_o(Y, p)$, where $Y \subseteq \text{Lit}(X)$ and $p \in I$, if $\exists m \in CA$ such that $MD(m) = Y \rightarrow p$.

**EXAMPLE 2 (continued).** Given $I = \{r, s\}$.

DEFINITION 3 (SMAS). A self modifying MAS is defined as \( \langle RA, SA, RO, X, B, D, G, AD, MD, \geq, I, PL, +, CA \rangle \) with:

- Additions \( + : A \times (B \cup D \cup G) \rightarrow I \). We write \( +_a(m) \) for \( +(a, m) \). The update of a SMAS by a set of literals \( L \subseteq \text{Lit}(I) \) is \( AD'_a = AD_a \cup \{m \mid +_a(m) \in L\} \).

EXAMPLE 3 (continued). We introduce additions: the institutional fact \( r \) (brought about via \( A \)'s action \( x_1 \)) “counts as” the addition (\( b_4 \)) of belief \( b_5 \) to the beliefs of the institution: this means that agent \( A \) has the power to express the opinion of the institution it belongs to. Moreover, the institutional fact \( s \) (brought about via \( A \)'s action \( x_2 \)) “counts as” the introduction (\( b_6 \)) of a goal \( g_4 \) in the state of the role \( B \): \( A \) has the power to commit the role \( B \) to a certain goal by means of its actions.

The consequences of belief rules are incorporated via a logic of rules called *out*. It takes the transitive closure of a set of rules, which can be extended in the process, and it is an extension of reusable throughput in input/output logic (Makinson and van der Torre, 2000) with generator revision.

5 Roles define expected behavior and obligations: a role’s goals are its responsibilities.
6 Roles define sets of rights: since institutions are modelled as normative systems, they can associate not only obligations but also rights and authorizations with roles. 7 Roles embed behavior specific to a context; roles exist only in and because of the institution they belong to: the institution is the context of a role, defining its specific behavior. 8 Roles define common interactions: constitutive rules also define how an action of the player of a role affects the beliefs and goals of another role, thus allowing communication. 9 Roles promote an organizational view of the system: roles compose the organizational structure of an institution, and the institution gives them the power to exogenously coordinate its own behavior. Figure 3. Some properties of roles in AO. DEFINITION 4 (Consequences). Add lists \( U = 2^{\text{Lit}(X)} \to 2^{\text{Rul}(X)} \) map sets of literals (situations) to the sets of rules they introduce. out is a function from sets of rules, sets of literals and add lists to sets of literals: \( \text{out} : 2^{\text{Rul}(X)} \times 2^{\text{Lit}(X)} \times U \to 2^{\text{Lit}(X)} \). Let \( \text{out}(E, S, R) \) be the closure of \( S \) under the rules \( E \), updated by the added rules \( R \), defined as follows. \[ \begin{align*} - E^0(E, S, R) & = E \\ - \text{out}^0(E, S, R) & = S \\ - E^{i+1}(E, S, R) & = E^i(E, S, R) \cup R(\text{out}^i(E, S, R)) \\ - \text{out}^{i+1}(E, S, R) & = \text{out}^i(E, S, R) \cup \{ l \mid L \to l \in E^i(E, S, R), L \subseteq \text{out}^i(E, S, R) \} \\ - \text{out}(E, S, R) & = \bigcup_{i=0}^\infty \text{out}^i(E, S, R) \end{align*} \] Here we are interested in the closure of a decision under a set of belief rules. The new belief rules of agent \( a \) in situation \( S \) are given by the add list \( R^+_a \), defined by \( R^+_a(S) = \{ MD(b) \mid b \in B, +_a(b) \in S \} \).
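The fixpoint computation behind out can be sketched in code. This is a minimal Python encoding, assuming rules are represented as (premises, conclusion) pairs; the concrete rule contents used in the usage example are reconstructed from Example 4 and are therefore an assumption, not taken from the paper.

```python
def out_closure(rules, facts, add_rules):
    """Close `facts` under `rules`, extending the rule set at each step
    with the rules produced by the add list `add_rules` (generator revision)."""
    rules = set(rules)            # each rule: (frozenset of premises, conclusion)
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # generator revision: derived literals may introduce new rules
        for rule in add_rules(derived):
            if rule not in rules:
                rules.add(rule)
                changed = True
        # ordinary closure step: fire every applicable rule
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical belief rules of role B reconstructing Example 4:
# x2 -> s and s -> +B(g4); no rules are added dynamically here.
B_B = {(frozenset({"x2"}), "s"), (frozenset({"s"}), "+B(g4)")}
result = out_closure(B_B, {"x1", "x2"}, lambda derived: set())
# result == {"x1", "x2", "s", "+B(g4)"}, matching Example 4
```

The `while` loop plays the role of the union over all stages \(i\): it terminates because both the rule set and the derived set only grow and are bounded.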
We finally introduce decisions of agents; they must be consistent with the consequences of the beliefs of the two agents \( A \) and \( B \) (\( \text{out}(B_A, \delta, R^+_A) \) and \( \text{out}(B_B, \delta, R^+_B) \)). The set of decisions \( \Delta \) is the set of sets \( \delta \subseteq \text{Lit}(X) \) such that their closures under the beliefs do not contain a variable and its negation. EXAMPLE 4 (continued). \( \text{out}(B_B, \{x_1, x_2\}, R^+_B) = \{x_1, x_2, s, +_B(g_4)\} \). According to role \( B \), \( A \)'s decision \( x_2 \) leads to \( s \) and adds the goal \( g_4 \). 1. Roles have properties: for example, each role has its own beliefs and goals. Moreover, roles have behavior in the sense that they can execute institutional actions via the actions of their players. 2. The dependence of roles on relationships is implied by the stronger notion of definitional dependence: the relation they depend on is the relation between the role and the social institution which defines it. 3. An agent may play different roles simultaneously: the role playing function is not surjective. 4. An agent may play the same role several times: a role is not defined by its beliefs and goals only, but also by the institution attributing them to the role. The same role in two different institutions gives two different roles, and nothing prevents an agent from playing both. 5. An agent may acquire and abandon roles dynamically: to play a role it is sufficient to know which beliefs the player is expected to have and which goals it is expected to adopt. The model can be extended with constitutive rules which affect the role playing relation. 6. The sequence in which roles may be acquired and relinquished is subject to restrictions which are specified in the constitutive rules of the social institution. 7. Objects of unrelated types can play the same role: to play a role it is necessary to be able to perform the actions which “count as” actions of the institution.
A different issue is whether the agent is suited to play a role, i.e., which beliefs and motivations it has. 8. Roles can play roles, since roles are defined as agents and agents can play roles. 9. A role can be transferred from one agent to another: the new player is expected to behave as if it had the current beliefs and goals attributed to the role. 10. The state of an agent is role-specific: the agent’s powers change with the role it is playing. 11. Features of an agent can be role-specific: according to its role, the agent has to act as if it had the beliefs and goals of the role. 12. Roles restrict access: roles are accessed only via powers. 13. Different roles may share structure and behavior: role definitions can be organized in a hierarchical way. 14. An agent and its roles share identity: roles are not real agents, but only descriptions of agents, so they have no identity as agents. 15. An agent and its roles have different identities, and roles are instances. Figure 4. The properties of roles in OO revisited. 7. Conclusion In this paper, we analyse the properties of social roles and provide a simple formal model of social institutions with roles. The main properties we attribute to roles are three. First, their definitional dependence: social roles exist only insofar as they are defined by some social institution. Second, besides rights and permissions, social roles are associated with powers in the institution they belong to. Finally, roles can play roles, like any other agent, since in our model a social role is itself considered a kind of agent. For this reason, as methodology we use the agent metaphor: both social institutions and social roles are modelled as a kind of agent, since they are attributed mental attitudes to describe their behavior. When attributing mental attitudes to social entities, we show that regulative rules can be defined as goals of the social institution and constitutive rules as its beliefs, as in (Boella and van der Torre, 2006).
In Figures 3 and 4, we reconsider the properties attributed in Section 2 to roles by, respectively, AO and OO, and we show how they are dealt with in our model. Since the two lists of properties overlap, in Table 3 we focus only on the properties which require the autonomy of the agent playing the role. Future work includes using the current model to propose social roles in agent communication languages. Finally, our model of roles is being used as the basis to introduce roles in Object Oriented programming languages like Java. In this way we offer a unified notion of roles in both Agent Oriented systems and Object Oriented ones. The integration between agent systems and more traditional OO systems can thus be fostered: e.g., agents can be used to play roles in OO systems, and agent systems implemented using OO architectures and languages already have roles at their disposal as a primitive. References
Evolutive Graphics with Linked Data

Carlos Saito Murata

Abstract

Data visualization in journalism has become in the last few years an important knowledge area that mixes journalism and computer science. This project focuses on data that evolves over time, its visualization, and how it is implemented nowadays. The project proposes two kinds of improvement: graphics that change automatically when the data gets updated, and the integration of external data to include information from knowledge databases. The project creates a prototype that uses both data that evolves over time and data from other resources. It is built around the topic of migration, enabling users to view migrations on a map and to filter those movements with filters like "migrations that happened from poor to rich countries". The prototype uses migration data stored in an accessible database, combined with data about countries extracted from Wikidata. The visualization also gets updated automatically if the sources change: for example, when the metrics used to estimate the wealth or poverty of a country change.

Contents

1 Introduction
  1.1 Data and graphics that evolve over time
    1.1.1 The Weinstein scandal
    1.1.2 The Panama Papers
  1.2 External sources and Linked Data
  1.3 Contributions
2 Background
  2.1 Ontag
  2.2 Wikidata
    2.2.1 RDF
    2.2.2 SPARQL
    2.2.3 Other data sources
3 Characterization of data
  3.1 Data that evolves over time
  3.2 Domain specific problems
    3.2.1 Partial information
    3.2.2 Contradictory data
  3.3 Other problems
4 Development
  4.1 Pre-design and high-level design
  4.2 Top-level design
  4.3 Design and implementation
5 Results
  5.1 Testing
    5.1.1 Idempotent testing
    5.1.2 Characterization testing
  5.2 User interaction
6 Conclusions and Future work
  6.1 Future work
  6.2 Conclusions
A API reference
  A.1 Entity recognition
    A.1.1 Global functions
    A.1.2 Recognizer instance methods
  A.2 Frontend
  A.3 Query
    A.3.1 Example

Chapter 1

Introduction

In the last few years, data visualization and journalism have come together, forming a new discipline: data-driven journalism. Authors like J. Gray et al. [1] and C. W. Anderson [2] explain the concept of data journalism and its importance. In addition, collections of data have become available online (e.g. Open Government Data) and open source tools allow analyzing and visualizing the data even with little knowledge of information technology [3].
This gives journalists access to new types of data and to the creation of more complex, data-driven visualizations, both to tell a story in better ways and to help journalists understand the data they handle. This project focuses on a specific type of data and on its visualization: data that evolves over time (which leads to graphics that evolve over time), explained through two examples used in journalism. It then asks how this can be improved, and two ideas are suggested: the automation of the evolution of data and its graphics, and the incorporation of external data (through Linked Data) that can also change over time.

1.1 Data and graphics that evolve over time

The data managed in this project are data that change over time. In short, this type of data is characterized by "giving different answers to the same question depending on the time the question is stated". This can happen in two scenarios.

1. **The question implies time.** For example, for a question like "Who is the winner of the last Tour de France?" the answer differs depending on the year, because of the annual periodicity of the tournament.

2. **The question does not imply time.** For a question like "Who is the winner of the Tour de France in 2014?" the answer is apparently fixed. However, years after the initial announcement of the winner, as a result of a doping scandal, a different person could be declared the winner of the tournament.

In both scenarios there is a problem with the reliability of the data: the source might be corrupted or the data might not be properly updated. The correctness of the data is out of the scope of this project. However, as long as the available information is updated and correct, the project is able to represent it properly. Some of the problems are addressed and corrected or, at least, discovered.
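The idea of a question whose answer changes over time can be sketched with a small revision history. This is an illustrative Python model, not the project's actual data structures; the values and years are invented.

```python
from bisect import bisect_right

class EvolvingFact:
    """An answer that may be revised over time: asking the same question
    at different moments can give different results."""

    def __init__(self):
        self._times = []    # sorted revision years
        self._values = []

    def revise(self, year, value):
        """Record a (possibly retroactive) revision of the answer."""
        i = bisect_right(self._times, year)
        self._times.insert(i, year)
        self._values.insert(i, value)

    def answer(self, asked_in):
        """Return the value as known in the given year (None if unknown)."""
        i = bisect_right(self._times, asked_in)
        return self._values[i - 1] if i else None

# Scenario 2 above: an announced winner is later revised after a scandal.
winner_2014 = EvolvingFact()
winner_2014.revise(2014, "winner as announced")
winner_2014.revise(2016, "winner after revision")
```

Asking `winner_2014.answer(2015)` and `winner_2014.answer(2018)` gives different answers to the same question, which is exactly the behavior the prototype's graphics must track.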
Two examples of the usage of data and graphics that evolve over time are shown below.

1.1.1 The Weinstein scandal

This example is an article published on Univision\(^1\) that discusses the implications of the #MeToo movement, an international movement against sexual harassment and assault that spread virally in 2017 as a hashtag used on social media to demonstrate the widespread prevalence of sexual assault and harassment, especially in the workplace. The article shows a list of news items about famous people accused of sexual harassment. The list has a graph on its left side that shows some numbers: when the user scrolls down, the list of news scrolls but the graph retains its position, displaying different information depending on the position of the scroll. The list displays two types of graphics: (i) below the photo of the harasser, the number of people harassed according to the article that is aligned with the graphic, and (ii) the total number of people harassed according to all the articles from the beginning up to the aligned one (figures 1.1, 1.2 and 1.3).

Figure 1.1: Screenshot of the Univision article with the scroll on top

\(^1\)See \url{http://uni.vi/z7ar100VHVT}

When the graph is aligned with the first article, it shows that 8 people were harassed according to that article, and that in total 8 people were harassed (figure 1.1). When the graph is aligned with the second article, it shows that 2 people were harassed according to that second article, and that 8+2 sums to 10, the number of people harassed according to both the first and second articles (figure 1.2). When the user scrolls to the end of the page, the graph shows the total number of people reported as harassed (figure 1.3). As more articles are added, this number changes, making the data non-constant.
1.1.2 The Panama Papers

This example is an ICIJ article\(^2\) that shows the relationships between people close to Donald Trump (the president of the United States of America in 2018) and the "Panama Papers" scandal\(^3\). It shows a graphic with Donald Trump in the center and lines that end in circles representing the people close to him (figure 1.4).

Figure 1.4: Screenshot of the ICIJ article when the user enters the page

When the user clicks one of the people, a biography is shown on the right side and a graph with the connections described in the biography on the left side (figure 1.5). As the user scrolls through the different parts of the biography on the right, the graphic on the left changes, showing the information written in the part the user is reading (figure 1.6). The information includes relationships between people and organizations (private companies and public organizations).

Figure 1.5: Screenshot of the ICIJ article when the user clicks on "Randal Quarles"

---

\(^2\)See [https://projects.icij.org/paradise-papers/the-influencers/#/](https://projects.icij.org/paradise-papers/the-influencers/#/)

\(^3\)The Panama Papers are 11.5 million leaked documents that detail financial and attorney–client information for more than 214,488 offshore entities.

1.2 External sources and Linked Data

This project explores the inclusion of data from multiple sources, which enables access to more data and the creation of more meaningful visualizations using those data. Specifically, this project uses two types of data: internal data and external data. Internal data are strings extracted from news articles with semantic annotations. This extraction is done with the tool Ontag, described in Section 2.1. In Ontag, data is curated and validated by the community. With the semantic annotations it is possible to join the data with external sources like Wikidata.
To avoid contradictions between internal and external data, different information is extracted from each source. In the case of more than one external source, it is necessary to have a mechanism to resolve conflicts between sources (either choosing one over another or applying some aggregation). The implementation of such a mechanism is out of the scope of the project. The project also assumes that all the data are facts: the sources have their own mechanisms to guarantee correctness before inserting data into the system. These improvements can be applied to the examples shown in Section 1.1. One improvement using external sources in the "Weinstein scandal" scenario could be to retrieve the data from various news sources, or to complement the information with other databases like IMDb\(^4\), the Internet Movie Database, and discover in which cases the harasser and victim worked on the same movie. In the "Panama Papers" scenario, a developer might want to retrieve information about the people involved from general knowledge databases like Wikipedia or other databases like Data.gov\(^5\), the collection of datasets published by the Government of the United States. The major requirement when dealing with this type of data (data from different sources) is to be able to connect the different databases together. Linked Data is a concept aimed at solving this problem. Linked Data employs two technologies: (i) the Resource Description Framework (RDF), a family of specifications of the W3C (see [4]), to describe and model information, and (ii) the Hypertext Transfer Protocol (HTTP) to publish structured data on the Web and to connect data between different data sources, effectively allowing data in one data source to be linked to data in another data source [5].
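The kind of cross-database question described above boils down to a join on shared identifiers, which is what Linked Data's common URIs make possible. In the sketch below the entity URIs are real Wikidata URIs (Q858 is Syria, Q41 is Greece), but the migration records and the rich/poor classification are invented for illustration.

```python
# "Internal" migration records and "external" country properties are
# connected through shared Wikidata URIs (illustrative data only).
internal = [
    {"origin": "http://www.wikidata.org/entity/Q858",       # Syria
     "destination": "http://www.wikidata.org/entity/Q41",   # Greece
     "amount": 38760},
    {"origin": "http://www.wikidata.org/entity/Q41",
     "destination": "http://www.wikidata.org/entity/Q858",
     "amount": 120},
]

external = {  # hypothetical classification derived from an external source
    "http://www.wikidata.org/entity/Q858": {"rich": False},
    "http://www.wikidata.org/entity/Q41": {"rich": True},
}

# "Migrations that happened from poor to rich countries"
poor_to_rich = [m for m in internal
                if not external[m["origin"]]["rich"]
                and external[m["destination"]]["rich"]]
```

If the external classification changes (for example, because the wealth metric changes), re-running the filter updates the result, which is the automatic-update behavior the prototype aims for.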
The principles of Linked Data were defined by Tim Berners-Lee in 2006 [6], and this guidance has been extended by documents like [7], which provides recipes on which publishing systems can be based. The mechanisms and technologies behind Linked Data and their usage in this particular project are discussed in Section 2.2.1.

1.3 Contributions

1. Design a system to create graphics that change automatically as data evolves.
2. Integrate external data sources with stored data.

\(^4\)See https://imdb.com
\(^5\)See https://data.gov

Chapter 2

Background

The solution for visualizing data that evolves over time and data from different sources is proposed in this project through the development of a software prototype. The prototype involves the design and development of software that shows relationships between migration movements and the properties of the places where those migrations happen. The prototype takes this information from different sources: (i) migration movements are taken from Ontag and are considered the internal data of the project; (ii) properties of the places where the migrations happen are taken from Wikidata and are considered the external data of this project.

Figure 2.1: External and internal data in this project

2.1 Ontag

Ontag\(^1\) is a tool that converts news articles into machine-readable data. It is promoted and developed by Common Action Forum in collaboration with the Ontology Engineering Group of the Technical University of Madrid. Ontag joins the concepts of question, tag, annotation and answer in four steps:

\(^1\)See [https://ontag-face.herokuapp.com](https://ontag-face.herokuapp.com)

1. **Create the question.** The community creates questions with periodical relevance. For example: *Describe the migration flow of refugees.*

2. **Tag the question.** The author of the question creates tags, which are the structure that answers of the question should have.
For example, the question may have the tags: *place of origin, destination, amount, date.*

3. **Propose content.** Users propose content that may answer the question. For example, *news articles.*
4. **Highlight the content.** Users highlight parts of the content, creating annotations. Then, users put the question tags on the annotations. For example, in an article, a user can highlight *Syria* and tag it with *place of origin*; highlight *Lesbos* and put the tag *destination*; and so on. All the annotations (with their tags) can be grouped together to form an answer to the question. See figure 2.2.

Figure 2.2: How data are related in Ontag

The data in Ontag is stored as text and can be read from a public API. The relevant endpoint for this project is **GET /answers**. It gives a list of answers, where each answer is a list of annotations.

```json
{
  "id": 3,
  "question_id": 1,
  "annotations": [
    {"text": "Syria", "tag": "origin"},
    {"text": "Lesbos", "tag": "destination"},
    {"text": "38760", "tag": "amount"}
  ]
}
```

### 2.2 Wikidata

As a source of “properties of places”, this Project uses Wikidata. Wikidata is a free and open knowledge base that can be read and edited by both humans and machines. Wikidata acts as central storage for the structured data of its Wikimedia sister projects including Wikipedia, Wikivoyage, Wikisource, and others [8]. The human-readable part of Wikipedia consists of HTML pages, each one describing a concept and readable like a physical encyclopedia. To make the data computer-readable, Wikidata implements the principles and technologies of Linked Data. The term Linked Data was coined by Tim Berners-Lee, who outlined four principles of linked data [6]:

1. Use URIs as names for things.
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful information using the standards.
4. Include links to other URIs, so that they can discover more things.
For this Project it is relevant to know how data are conceptually stored and how data can be read. Data are stored using RDF and can be read using SPARQL. The following section (2.2.1) only describes RDF as a concept. The actual implementation of both RDF and SPARQL is not covered here and is not relevant for this Project.

### 2.2.1 RDF

The Resource Description Framework (RDF) is a family of specifications of the World Wide Web Consortium (see [4]) used to describe and model information. This section explains how a page in Wikidata describing Douglas Adams is transformed into computer-readable data conforming to the RDF specifications. The article in Wikidata about Douglas Adams contains (among others) the information shown in Table 2.1:

<table> <thead> <tr> <th colspan="2">Douglas Adams</th> </tr> </thead> <tbody> <tr> <td>Native language</td> <td>British English</td> </tr> <tr> <td>Place of birth</td> <td>Cambridge</td> </tr> <tr> <td>Educated at</td> <td>St John’s College</td> </tr> </tbody> </table>

Table 2.1: Human readable information about Douglas Adams

In RDF all the information is stored in triples. Every triple is a subject-predicate-object tuple. The information shown in table 2.1 is equivalent to the triples shown in table 2.2, where each row is a triple.
\footnote{See https://wikidata.org/wiki/Q42}

Table 2.2: Information about Douglas Adams expressed in triples

<table> <thead> <tr> <th>Subject</th> <th>Predicate</th> <th>Object</th> </tr> </thead> <tbody> <tr> <td>Douglas Adams</td> <td>Native language</td> <td>British English</td> </tr> <tr> <td>Douglas Adams</td> <td>Place of birth</td> <td>Cambridge</td> </tr> <tr> <td>Douglas Adams</td> <td>Educated at</td> <td>St John’s College</td> </tr> </tbody> </table>

Then, following the principle of Linked Data that says that URIs are used as names for things, every concept (thing) should be identified by a URI, as shown in table 2.3:

<table> <thead> <tr> <th>Concept</th> <th>URI</th> </tr> </thead> <tbody> <tr> <td>Douglas Adams</td> <td><a href="https://wikidata.org/wiki/Q42">https://wikidata.org/wiki/Q42</a></td> </tr> <tr> <td>British English</td> <td><a href="https://wikidata.org/wiki/Q7979">https://wikidata.org/wiki/Q7979</a></td> </tr> <tr> <td>Cambridge</td> <td><a href="https://wikidata.org/wiki/Q350">https://wikidata.org/wiki/Q350</a></td> </tr> <tr> <td>Place of birth</td> <td><a href="https://wikidata.org/wiki/Property:P19">https://wikidata.org/wiki/Property:P19</a></td> </tr> </tbody> </table>

Table 2.3: Concepts as URIs

It is important to note that the predicates in the triples (“Native language”, “Place of birth”, “Educated at”) are also concepts and, because of this, they are identified by URIs as well. In conclusion, RDF represents data in triples, where each element that is not a simple datatype (number, boolean, string) is identified by a URI.

### 2.2.2 SPARQL

SPARQL is an RDF query language, that is, a semantic query language for databases, able to retrieve and manipulate data stored in RDF format. SPARQL allows a query to consist of triple patterns, conjunctions, disjunctions and optional patterns [9]. SPARQL queries read data from a triple database; a query can search for triples given any part of them.
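The idea of searching for triples “given any part of them” can be illustrated with a minimal in-memory triple store. The triples below are the Douglas Adams examples from table 2.2; the `match` helper and the null-as-wildcard convention are illustrative and not part of SPARQL itself:

```javascript
// A tiny in-memory triple store: each triple is [subject, predicate, object].
const triples = [
  ['Douglas Adams', 'Native language', 'British English'],
  ['Douglas Adams', 'Place of birth', 'Cambridge'],
  ['Douglas Adams', 'Educated at', "St John's College"]
];

// Return all triples matching a pattern; null acts as a wildcard,
// playing the role of a SPARQL variable.
function match(store, s, p, o) {
  return store.filter(([ts, tp, to]) =>
    (s === null || ts === s) &&
    (p === null || tp === p) &&
    (o === null || to === o));
}

// "What is Douglas Adams' place of birth?"
console.log(match(triples, 'Douglas Adams', 'Place of birth', null));
// "Who was born in Cambridge?"
console.log(match(triples, null, 'Place of birth', 'Cambridge'));
```

Fixing the subject and predicate recovers the object (as in the first SPARQL example below), while fixing the predicate and object recovers the subject (as in the “all cities” query).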
For example, knowing the URI for “Douglas Adams” (written in the code as wd:Q42) and the URI for “Native language” (wdt:P103), it is possible to perform a query that looks for the object of triples where the subject is “Douglas Adams” and the predicate is “Native language”. This query, in the SPARQL language, is:

```sparql
SELECT ?language WHERE {
  wd:Q42 wdt:P103 ?language
}
```

This returns the language “British English” bound to the variable “?language” defined in the query. It is also possible to make queries that return more than one result. The following query returns all the cities stored in Wikidata or, equivalently, all the subjects (bound to the variable “?city”) of triples where the predicate is “instance of” and the object is “city” (in the code shown below, for simplification, the actual URIs for “instance of” and “city” are replaced by “wdt:instance_of” and “wdt:city” respectively).

```sparql
SELECT ?city WHERE {
  ?city wdt:instance_of wdt:city .
}
```

The result is a list of all the cities of the world (table 2.5 shows 5 elements; the actual list returned by Wikidata contains more than 11000 elements).

<table> <thead> <tr> <th>?city</th> </tr> </thead> <tbody> <tr> <td>Berlin</td> </tr> <tr> <td>London</td> </tr> <tr> <td>Toronto</td> </tr> <tr> <td>Nuuk</td> </tr> <tr> <td>Vatican City</td> </tr> <tr> <td>...</td> </tr> </tbody> </table>

Table 2.5: Extract of the list of “All cities in the world” returned by Wikidata

It is possible to make more complex queries that retrieve at the same time a list of all the countries in the world and some data about those countries, like the GDP per capita or the country code. Knowing the URIs of the correct terms (shown in table 2.6), the following code returns a table of all the countries in the world with their GDP per capita and their 2-digit country code. The result of the query is shown in table 2.7.

```sparql
SELECT ?country ?countryCode ?gdp WHERE {
  ?country wdt:P31 wd:Q3624078 .
  ?country wdt:P297 ?countryCode .
  OPTIONAL { ?country wdt:P2299 ?gdp . }
}
```

Table 2.6: Concepts and their URIs in Wikidata

<table> <thead> <tr> <th>Concept</th> <th>URI</th> </tr> </thead> <tbody> <tr> <td>Instance of</td> <td>wdt:P31</td> </tr> <tr> <td>Sovereign country</td> <td>wd:Q3624078</td> </tr> <tr> <td>ISO 3166-1 alpha-2 code</td> <td>wdt:P297</td> </tr> <tr> <td>GDP per capita</td> <td>wdt:P2299</td> </tr> </tbody> </table>

Table 2.7: Countries, country codes and GDP

<table> <thead> <tr> <th>?country</th> <th>?countryCode</th> <th>?gdp</th> </tr> </thead> <tbody> <tr> <td>Canada</td> <td>CA</td> <td>45066</td> </tr> <tr> <td>Ireland</td> <td>IE</td> <td></td> </tr> <tr> <td>Spain</td> <td>ES</td> <td>33629</td> </tr> <tr> <td>Luxembourg</td> <td>LU</td> <td>101926</td> </tr> </tbody> </table>

For this query, three properties are used as examples: *Sovereign country*, *ISO 3166-1 alpha-2 code* and *GDP per capita*. Later in the Project (see section 4.1), when the actual properties are used to build the prototype, a proper definition will be given.

### 2.2.3 Other data sources

The external source for this Project could have been another database. Wikidata is a general knowledge database, not a special-domain one. This means that Wikidata can offer broad knowledge on diverse areas but not deep knowledge on any of them. For the scope of this project, and the target of the application, the knowledge offered by this type of database is enough. Another database that was taken into consideration was DBpedia\(^3\). DBpedia is a “crowd-sourced community effort to extract structured content from the information created in various Wikimedia projects. This structured information resembles an open knowledge graph (OKG) which is available for everyone on the Web” [10]. The main difference between DBpedia and Wikidata is how concepts are defined in each.
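As a sketch of how such a query could be used from code: the public Wikidata Query Service accepts SPARQL over HTTP and returns results in the standard SPARQL JSON results format (`head` plus `results.bindings`). The helper names below are illustrative, and the sample response is simplified:

```javascript
// Endpoint of the public Wikidata Query Service.
const ENDPOINT = 'https://query.wikidata.org/sparql';

// Build a GET request URL for a SPARQL query, asking for JSON results.
function buildQueryUrl(sparql) {
  return ENDPOINT + '?format=json&query=' + encodeURIComponent(sparql);
}

// Flatten the SPARQL JSON results format into plain objects,
// keeping only the bound values.
function parseBindings(response) {
  return response.results.bindings.map(binding =>
    Object.fromEntries(
      Object.entries(binding).map(([name, cell]) => [name, cell.value])));
}

// A canned response in the shape returned by the endpoint:
const sample = {
  head: { vars: ['country', 'countryCode'] },
  results: {
    bindings: [
      { country: { type: 'uri', value: 'http://www.wikidata.org/entity/Q29' },
        countryCode: { type: 'literal', value: 'ES' } }
    ]
  }
};

console.log(buildQueryUrl('SELECT ?country WHERE { ?country wdt:P31 wd:Q3624078 . }'));
console.log(parseBindings(sample));
```

In a real client the URL would be fetched over HTTP and the JSON body passed to `parseBindings`; here a canned response stands in for the network call.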
Since DBpedia is a knowledge base extracted from Wikipedia, and Wikipedia is multi-lingual (different languages have different versions of Wikipedia), DBpedia results in a multi-lingual knowledge base. On the other hand, Wikidata is a single knowledge base where each defined concept can have multiple “labels” (one per language) associated with it. Because of this, in DBpedia, the same concept (e.g. “Greece”) may have different URIs (one for each Wikipedia article). All the URIs are linked to each other by a “same-as” property.

---

\(^3\)See [https://wiki.dbpedia.org](https://wiki.dbpedia.org)

Even with the efforts to unify concepts and URIs (explained by Kontokostas et al. in [11]), for this Project the approach taken by DBpedia is more problematic, and Wikidata is preferred. However, keeping this issue in mind, DBpedia is a good alternative that offers a larger amount of information than Wikidata. A further, detailed comparison of knowledge databases is done in other publications like [12], which also compares Wikidata and DBpedia with services like YAGO\textsuperscript{4}, Freebase\textsuperscript{5} and OpenCyc\textsuperscript{6}.

\textsuperscript{4}See http://yago-knowledge.org
\textsuperscript{5}See https://freebase.com
\textsuperscript{6}See http://www.cyc.com/opencyc/

## Chapter 3: Characterization of data

At this point of the Project it is important to define properly the kind of data being handled (data that evolves over time) and to analyze the potential problems that can arise with these data in general and within the domain of the prototype in particular.

### 3.1 Data that evolves over time

Data that evolves over time could be defined as data “that gives different answers to the same question depending on the time the question is stated”. This can happen in two scenarios.

1. The question implies time.
For example, for a question like “Who is the winner of the last Tour de France?”, the answer differs depending on the year because of the annual periodicity of the race.

2. The question does not imply time. For a question like “Who is the winner of the Tour de France in 2014?”, the answer appears fixed. However, years after the initial announcement of the winner, as a result of a doping scandal, a different person could be declared the winner of the race.

In both scenarios there is a problem with the reliability of the data. The source might be corrupted or the data might not be properly updated. The correctness of the data is out of the scope of this project. However, as long as the available information is updated and correct, the project is able to represent it properly. Some of the problems are addressed and corrected or, at least, discovered.

### 3.2 Domain specific problems

The system queries data with information about migration movements between places. Table 3.1 shows an example entry.

Table 3.1: Example of migration data taken from Ontag

<table> <thead> <tr> <th>Origin</th> <th>Destination</th> <th>Date range</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>Syria</td> <td>Lesbos</td> <td>2015-01-01 to 2015-06-30</td> <td>38 760</td> </tr> </tbody> </table>

The system has to query data following certain criteria. In this operation, some problems may arise depending on the algorithm used in each operation. For all the problems described in this section, the developer that implements the system should also ensure that the correct data is returned when queried under those circumstances.

### 3.2.1 Partial information

In some cases, the query only partially matches the data.
For example, suppose the query is “read all the migrations that happened in 2016” and the stored data is the data in table 3.2:

<table> <thead> <tr> <th>Origin</th> <th>Destination</th> <th>Date range</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>AB</td> <td>C</td> <td>2015-12-20 to 2016-03-20</td> <td>5000</td> </tr> <tr> <td>CD</td> <td>AB</td> <td>2015-12-20 to 2017-03-02</td> <td>10000</td> </tr> </tbody> </table>

Table 3.2: Example of partial information problem

Some implementations might ignore both rows because their date ranges extend beyond 2016, which is the most restrictive approach. However, others may try to interpolate and calculate how many of the people in both rows correspond to the year 2016.

### 3.2.2 Contradictory data

In some cases, different registers show contradictory information. In the example in table 3.3, the same migration from A to B is happening at the same time, but with a different amount of people doing the migration.

<table> <thead> <tr> <th>Origin</th> <th>Destination</th> <th>Date range</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>B</td> <td>2015-12-20 to 2016-03-20</td> <td>10000</td> </tr> <tr> <td>A</td> <td>B</td> <td>2015-12-20 to 2016-03-20</td> <td>1000</td> </tr> </tbody> </table>

Table 3.3: Example of contradictory data

The most restrictive implementation discards both rows and also includes tests to detect these types of contradiction. However, other implementations may try to extract a conclusion from these registers, for example, returning the average amount. Sometimes, the information can be partially contradictory. Consider the example in table 3.4, where the date ranges of rows (1) and (2) overlap by some days.
<table> <thead> <tr> <th>Origin</th> <th>Destination</th> <th>Date range</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>B</td> <td>2015-12-20 to 2016-03-20</td> <td>10000</td> </tr> <tr> <td>A</td> <td>B</td> <td>2016-02-20 to 2016-08-20</td> <td>1000</td> </tr> </tbody> </table>

Table 3.4: Example of partially contradictory data

The most restrictive implementation would discard both rows in case of a query including them together. However, if the query is performed to get only migrations in 2016, some implementations may or may not include the second row. Algorithms that try to aggregate the data to return calculated values should be designed to consider queries that include one, the other, or both rows, as well as cases where more than two rows are partially contradictory. Other types of contradiction are harder to detect. Consider the data in table 3.5:

<table> <thead> <tr> <th>Origin</th> <th>Destination</th> <th>Date range</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>Paris</td> <td>2015-12-20 to 2016-03-20</td> <td>10000</td> </tr> <tr> <td>A</td> <td>France</td> <td>2015-12-20 to 2016-03-20</td> <td>1000</td> </tr> </tbody> </table>

Table 3.5: Example of semantic contradiction

In this case, the two rows are not contradictory from a strictly formal point of view. However, it is not possible that more people move from the same place (A) to Paris than to France, as Paris is part of France. Detecting this type of contradiction needs deep knowledge of the database and skills, including advanced entity recognition, that are out of the scope of this Project. The implementations that are mentioned but not included in the Project have different implications: they might require deep knowledge in statistics, geography, anthropology or sociology, among others. Some of them also open ethical issues, leading to misinformation or bias from the designer of the algorithm. All these implications are outside the scope of this project.
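One of the interpolation strategies mentioned in section 3.2.1 (prorating a row's amount by the fraction of its date range that falls inside the queried interval) can be sketched as follows. The proration-by-days rule and the field names `from`/`to`/`amount` are one possible design choice, not the only one:

```javascript
// Number of days between two ISO dates (the end day is exclusive).
function days(from, to) {
  return (Date.parse(to) - Date.parse(from)) / 86400000;
}

// Prorate a row's amount by the share of its date range that overlaps
// the interval [from, to). Returns 0 when the ranges are disjoint.
// ISO date strings compare correctly with plain string comparison.
function prorate(row, from, to) {
  const start = row.from > from ? row.from : from;
  const end = row.to < to ? row.to : to;
  if (start >= end) return 0;
  return row.amount * days(start, end) / days(row.from, row.to);
}

// First row of table 3.2: 5000 people between 2015-12-20 and 2016-03-20.
const row = { from: '2015-12-20', to: '2016-03-20', amount: 5000 };

// Share attributable to 2016: 79 of 91 days.
console.log(Math.round(prorate(row, '2016-01-01', '2017-01-01'))); // 4341
```

The restrictive alternative described above would simply drop any row whose range is not fully contained in the query interval, avoiding the interpolation assumption entirely.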
### 3.3 Other problems

Working with data from external sources has mainly two problems: availability and reliability.

Availability problems may happen because of actual unavailability of the service or because of some network problem. A related problem is performance: if the software must handle multiple requests and each of them takes some amount of time, the result could be a badly performing system. Both problems can be mitigated by solutions like implementing a cache. These solutions are out of the scope of this project.

Reliability issues happen if the data from external sources is incomplete, contradictory or not true. It is completely out of the scope of this project to solve this issue. In this Project, it is assumed that all the information provided by all sources, i.e. Wikidata and Ontag, consists of facts. This is possible because those projects have their own methods to verify the information.

## Chapter 4: Development

To design the software, the methodology chosen is an adapted version of incremental agile. The steps of this methodology are: pre-design (including collection of user requirements), design (high-level and low-level), implementation and test.

1. **Pre-design.** In this phase, a series of preconditions are set. These preconditions are chosen as limits of the prototype and include technical decisions: programming languages, tools, frameworks. Based on the limitations of the prototype, a number of interviews are conducted with users in order to get the point of view of the potential users of the system. The outcome of this phase is an *initial design of the application*.
2. **High-level design.** A design of the system is made based on the requirements taken from the previous phase. The design includes both software architecture design and user interface design. After this phase, the software is divided into parts that can be developed incrementally.
After the pre-design and high-level design phases, a loop of the design, implementation and test phases is done for each part of the software:

1. **Low-level design.** Design of one part of the system. It includes the details of the architecture and details of the user interface.
2. **Implementation.** The actual code for that part of the system is written in this phase.
3. **Test.** In this phase, all the written code is tested. First, it is tested using unit tests. Then, the integration with the existing parts of the software is also tested (integration tests). Finally, if necessary, the user interface is tested against real users.

### 4.1 Pre-design and high-level design

The prototype of the system is a “dashboard”-like web application. The dashboard accepts two inputs from the user: a date and a filter. Based on those two inputs, and having the migrations and countries data, the dashboard shows a visualization of the migration data that meets the filter chosen by the user. For example, a user might want to see migrations that happened in 2015 from poor to rich countries. In this example, 2015 is the date and from poor to rich countries is the filter.

To decide which filters are better to have in this prototype, a series of interviews is performed in order to gather feedback from potential users. Three interviews are conducted in this phase. The interviewed people are: (i) a professional journalist working in a non-profit organization, (ii) a student of the Master in Human Rights at Uppsala University and (iii) a student of the Bachelor in Peace and Development at Uppsala University. All of them are given a brief explanation of the app together with the question stated above. After that, possible filters are discussed. These are the filters that the interviewees found interesting to have:

- **Languages spoken in a place.** Not only the first language but also second and third.
- **Form of government.** Monarchy, dictatorship, parliament...
- **Climate.** Average temperature, average precipitation, number of natural catastrophes...
- **Human Development Index (HDI).** An indicator that aggregates life expectancy, education and income per capita, used to rank countries. It is used by the United Nations Development Programme to measure countries’ development. [13]
- **Gross Domestic Product per capita on the basis of purchasing power parity** (or GDP (PPP) per capita): the value of goods and services produced within a nation in a given year, converted to U.S. dollars, divided by the population and adjusted for differences in the cost of living between countries. [14]
- **Country freedom** according to the Freedom in the World report, a yearly survey and report made by the non-governmental organization Freedom House that measures, among others, the degree of political rights around the world. [15]
- **Peacefulness of a country**, depending on whether the country is at war.

To choose which filters to include in the dashboard, a search in Wikidata is done to check which ones appear as properties of countries. Among those, three are discarded: languages spoken in a place, climate and peacefulness of countries.

- Wikidata does not store climate values in country pages. It would be possible to implement this filter by consulting other databases, like those of climate agencies.
- Wikidata has information about the official languages spoken in a country. This data is different from the languages actually spoken by the population, as it excludes the languages taught at school.
- In Wikidata, ambiguous data are not correctly defined. Conflicts that do not have specific and objective start and end dates are difficult to formalize, and they are not present in Wikidata.

After this, to be able to implement the filters, the remaining ones are formalized in terms of the data about migrations.
The definitions of the filters are:

- **Filter by Human Development Index.** Movements such that the HDI of the origin is less than 0.50 and the HDI of the destination is higher than 0.75.
- **Filter by GDP (PPP) per capita.** Movements such that the GDP (PPP) per capita value of the origin is less than the value at the destination.
- **Filter by country freedom.** Movements such that the origin is a non-free country and the destination is a free country.

The filter “form of government” is finally discarded, given the complexity of formalizing it because of the numerous forms of government around the world and their classification.

### 4.2 Top-level design

To make the data flow through the system from the beginning (migration data from Ontag and data about countries from Wikidata) to the end (the dashboard) with the inputs from the user, one more element is required: a way to link the places contained in Ontag data (strings) with Wikidata concepts (URIs). To do this, an “entity recognition” module is placed in the system (see figure 4.1). In addition, the link between Ontag strings and Wikidata URIs is stored in a database\(^1\) (accessed via the “psql” module); the user inputs (year and filter) are grouped into a “query” module, which is responsible for reading the data from the database given the user inputs; finally, the dashboard is divided into web components. This data flow leads to a top-level architecture of the system (figure 4.2), where the modules are grouped into different layers.

---

\(^1\)The database chosen for this Project is PostgreSQL, a relational database. However, given that the data handled in the prototype is stored in a single table, this choice has no implications.

The data access layer's only responsibility is to access (write and read) external resources (Wikidata and psql); the logic of operating on the data is handled by the other layers of the application. Further details of testing are in section 5.1.
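The three filter definitions given in section 4.1 can be sketched as predicates over a movement. The sketch assumes each movement already carries the country data retrieved from Wikidata; the field names `hdi`, `gdp` and `free` are illustrative, not the Project's actual schema:

```javascript
// Each movement is assumed to carry the relevant country properties,
// e.g. { origin: {hdi, gdp, free}, destination: {hdi, gdp, free}, amount }.
const filters = {
  // HDI of the origin below 0.50 and HDI of the destination above 0.75.
  hdi: m => m.origin.hdi < 0.50 && m.destination.hdi > 0.75,
  // GDP (PPP) per capita of the origin lower than that of the destination.
  poorToRich: m => m.origin.gdp < m.destination.gdp,
  // From a non-free country to a free one.
  nonFreeToFree: m => !m.origin.free && m.destination.free
};

const movement = {
  origin: { hdi: 0.45, gdp: 2000, free: false },
  destination: { hdi: 0.90, gdp: 45000, free: true },
  amount: 10000
};

// This movement satisfies all three filters.
console.log(filters.hdi(movement), filters.poorToRich(movement), filters.nonFreeToFree(movement));
```

Expressing the filters as independent predicates makes it straightforward to add new ones later (one of the extensions proposed in the future work).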
### 4.3 Design and implementation

**Entity recognition**

The entity recognition module has two functions: (a) a **recognition function** that transforms strings like “Paris” into concepts, and (b) an **insertion function** that inserts the migration data, with the places converted into URIs, into the database. The latter function only calls the psql module in the data access layer and handles the possible errors on insertion. The recognition function is more complex and its sequence is the following (also shown in figure 4.3):

1. The recognition function receives the input as a string.
2. It calls the `search()` function in the Wikidata module, which searches for the concept using the Wikidata search API\(^3\). The query returns a list of URIs matching the string.
3. For each concept of the list, the `getType()` function is called, which performs a query to Wikidata to get the type of the concept, specifically to guess whether the concept is an instance of Place or any subtype of Place.
4. All the concepts that are not places are discarded and the first element of the list is returned.

---

\(^3\)See https://www.wikidata.org/w/api.php?action=help&modules=wbsearchentities

This recognition function is a very elementary implementation of a string-to-concept function. It is based on the assumption that the list of entities returned by the Wikidata search API is ordered, with the first element being the closest to the search string. The function fails if the concept to be matched is not present in Wikidata, if the concept is not correctly classified as “Place”, and also if the search is performed for places with homonyms. It also ignores the context of the word. Covering all this would require an advanced implementation of a Natural Language Processing function, which is out of the scope of this Project.

**Query**

The query function returns the movements contained in the database given a year and a filter. Its sequence is described below and shown in figure 4.4.

Figure 4.4: Sequence of query function

1.
The function receives two inputs: year and filter.
2. It calls the psql module, which performs a query in the database to get the movements that happened in the input year. The psql module returns a list of movements.
3. The query function calls the `getCountryData()` function from the Wikidata module to get the data of a specific country (its Gross Domestic Product per capita based on PPP, its Human Development Index and whether the country is free or not).
4. The query function filters the movements using the filter input and the data returned by the previous step.
5. The query function returns the filtered movements.

To enhance the performance of the system, the `getCountryData()` function called here is a good place to put a cache of the countries data retrieved from Wikidata.

**Frontend components**

The front-end part of the Web Interface is a component tree formed by several components:

- **Dashboard** is the root component. It stores the internal state of the whole application and also makes queries to the back-end.
- **Map** shows a graphic representation of the data (a list of origin-destination-amount tuples). This component can have different children depending on how the data is represented. If a child needs a specific input, the transformation from the origin-destination-amount tuple to that specific input is done in this component. For example, in the prototype, it has a *Cloropeth* component, which is a map in which areas are shaded in proportion to some measurement. In this case, the cloropeth has two colours (red and blue), where red means “country with people moving out” and blue “country with people moving in”. The more saturated red or blue a country is, the larger the amount of people moving in or out.
- **Date Picker** allows users to choose a date.
- **Filter Selector.** With this component users can choose between the filter options: “Human Development Index”, “Poor to rich” and “Non-free to free”.
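The conversion performed in the Map component for the Cloropeth child (turning origin-destination-amount tuples into a net gain or loss per country) can be sketched as follows; the function name `netFlows` is illustrative:

```javascript
// Aggregate movements into per-country net flows: each movement
// subtracts its amount from the origin and adds it to the destination.
function netFlows(movements) {
  const totals = {};
  for (const { origin, destination, amount } of movements) {
    totals[origin] = (totals[origin] || 0) - amount;
    totals[destination] = (totals[destination] || 0) + amount;
  }
  return Object.entries(totals).map(([country, amount]) => ({ country, amount }));
}

// The example movements used in the walkthrough below.
const movements = [
  { origin: 'sy', destination: 'fr', amount: 10000 },
  { origin: 'sy', destination: 'es', amount: 3000 },
  { origin: 'mo', destination: 'es', amount: 12000 },
  { origin: 'mo', destination: 'fr', amount: 5000 },
  { origin: 'es', destination: 'fr', amount: 1000 }
];

// e.g. 'sy' loses 13000 people, 'fr' gains 16000.
console.log(netFlows(movements));
```

Note that the net flows always sum to zero, since every person counted as leaving one country is counted as arriving in another.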
Following the principle of single source of truth, the information that is relevant to one component is stored in its internal state. However, if that information affects other components, it is stored in their common ancestor. For example, the data retrieved from the backend is stored in **Dashboard**. The date chosen by the user is also stored in **Dashboard**, because that data is needed in both the **Map** and **Date Picker** components. However, the zoom level, which is only relevant to the map, is stored in the **Map** component.

Figure 4.5 shows the steps taken by each component at the beginning, when the user accesses the app. This example includes a Cloropeth component, which is a child of the Map component.

1. The Dashboard component performs a query to the backend, to the /movements endpoint, to get all the movements done in 2016. The query for that is GET /movements?year=2016
2. The backend (the query module) responds with a list of all the movements done in 2016. An example response is the following, representing movements among Syria, Morocco, Spain and France.

```
[
  {origin: 'sy', destination: 'fr', amount: 10000},
  {origin: 'sy', destination: 'es', amount: 3000},
  {origin: 'mo', destination: 'es', amount: 12000},
  {origin: 'mo', destination: 'fr', amount: 5000},
  {origin: 'es', destination: 'fr', amount: 1000}
]
```

3. The Dashboard component passes the response to the Map component.
4. The Map component takes the response and adapts it to data that matches the inputs of the actual map, in this case the Cloropeth component. The result of this conversion is an array of countries and how much population they gain or lose due to the migrations:

```
[
  {country: 'sy', amount: -13000},
  {country: 'mo', amount: -17000},
  {country: 'fr', amount: 16000},
  {country: 'es', amount: 14000}
]
```

5.
The Cloropeth component draws the map with the input from the previous step.

Figure 4.6 shows the steps taken by each component when the user chooses a different year (they click on a year).

Figure 4.6: Sequence diagram when user selects a date

1. **A click is dispatched in the `<YearSelector>` component.** The `onClick` prop is called, which is actually a function passed by `Dashboard`.
2. **The `<Dashboard>` component** checks whether it already has the data of the chosen year stored.
   - If it does, steps 3 to 5 shown previously are taken. If not, all the steps described before are taken.

Refer to Appendix A for the full API reference of all the modules of the system.

## Chapter 5: Results

### 5.1 Testing

Given the specific constraints of the system (the type of data that is being managed and the user interaction with it), testing of the system is done by mixing idempotent tests and characterization tests. Not all parts of the system are tested.

### 5.1.1 Idempotent testing

The goal of these tests is to ensure the good functioning of the system. These tests must not touch any external service and, as a rule, every time the tests are run, they must give the same results. The tested modules are the entity recognition module and the query module. Those modules depend on external elements (Wikidata and psql respectively). In the context of the tests, those external elements are mocked. This behaviour is easy to achieve since the program is separated into layers, as seen in figure 4.1 of section 4.2. The data access layer's only purpose is to access external data without doing any intermediate operation, and it is easy to replace it with a layer that simulates an external service for testing purposes. This kind of test is useful to check the algorithms chosen by the developer, especially to detect the problems addressed previously (see section 3.2).
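A minimal sketch of how the data access layer can be swapped for a mock in idempotent tests: the logic under test receives its data access module as a dependency, so a fake Wikidata layer with canned data can be injected. The module shape and function names here are illustrative, not the Project's actual API:

```javascript
// A piece of query logic that depends on the data access layer,
// received here as an explicit dependency.
function makeQuery(wikidata) {
  return function countryIsFree(countryCode) {
    return wikidata.getCountryData(countryCode).free === true;
  };
}

// Mocked data access layer: returns canned data and never touches the
// network, so the test gives the same result on every run.
const mockWikidata = {
  getCountryData: code => ({ es: { free: true }, xx: { free: false } }[code])
};

const countryIsFree = makeQuery(mockWikidata);
console.log(countryIsFree('es')); // true
console.log(countryIsFree('xx')); // false
```

In production, the same `makeQuery` would be given the real Wikidata module from the data access layer; only the injected dependency changes.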
To test the integration of the system with the external services, instead of preparing a test suite comparing expected and returned results, an approach based on characterization testing is followed.

### 5.1.2 Characterization testing

Characterization testing is a technique that consists of two steps.

1. In a first step, the tests run the functions to be tested and their results are saved in the system. Before saving, the results should be manually checked and only saved if they are the expected ones. If the code of the program is under version control, the saved results are also part of it.
2. In a second step, the tests run the functions again and this time the results are compared with the previously saved ones, raising errors if they are different.

In short, this means that the results of the functions run in step one are the “expected” results for the second step. All validations are done in this second step. If there is an error, the results should be checked manually to conclude that: (a) the returned results are not the expected ones, so the error is correct, or (b) the returned results are valid and the saved version must be updated for future testing. These tests are slow to run because they make actual queries to the external services. Also, like the integration tests mentioned before, these tests can fail due to changes in the services (their API, the implementation) and other external causes (network loss, bad configuration, etc.). The intention of these tests is not to ensure the good functioning of the system, and they do not locate errors in the system by themselves, which in some way contradicts the intention of any software testing. Even with all the mentioned drawbacks, characterization testing is useful to ensure, more or less, that under certain circumstances the system behaves in the same way. It is also a way to detect that the external services have updated their data, something that is specifically relevant in this Project.
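The two-step mechanism can be sketched as a small helper: on the first run a result is recorded as the expected snapshot, and on later runs results are compared against it. In the real setup the snapshots would live in files under version control and be manually checked before saving; here a plain in-memory object stands in for that storage, and the function name `characterize` is illustrative:

```javascript
// In-memory stand-in for the snapshot storage (files in practice).
const snapshots = {};

// Run a characterization check: record the result on the first run,
// compare against the recorded value on subsequent runs.
function characterize(name, result) {
  const serialized = JSON.stringify(result);
  if (!(name in snapshots)) {
    snapshots[name] = serialized;        // step 1: save (after manual check)
    return 'saved';
  }
  return snapshots[name] === serialized  // step 2: compare
    ? 'unchanged'
    : 'changed';
}

console.log(characterize('cities', ['Berlin', 'London'])); // saved
console.log(characterize('cities', ['Berlin', 'London'])); // unchanged
console.log(characterize('cities', ['Berlin', 'Paris']));  // changed
```

A 'changed' outcome is exactly the ambiguous case described above: either the function regressed, or the external service legitimately updated its data and the snapshot must be refreshed.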
5.2 User interaction

When the user enters the system, they see the dashboard divided into three regions: a map in the center occupying almost all the screen, the filter selector on the right, and a year selector at the bottom (see figure 5.1). By default the chosen year is “2017” and the selected filter is “all”, meaning that the map shows all the movements that happened during 2017.

Then, the user can choose another filter, for example “from non-free to free”, and the map shows only the movements that happened from non-free to free countries, as shown in figure 5.2. If the user chooses the “poor to rich” filter, the map shows the movements that match the GDP (PPP) per capita filter (figure 5.3). Some results may look strange, since this filter shows movements from “poorer to richer countries”, meaning that a movement from a “poor” country to a “not-so-poor” country is included in this filtering.

If the user chooses the “low HDI to high HDI” filter in year 2017 (figure 5.4), the map is completely blank because Wikidata does not offer any data about HDI in 2017. Notice that, if Wikidata gets an update and it includes the HDI of the countries for 2017, the map would show the movements correctly without any human manipulation needed. By choosing another year, for example 2014 (figure 5.5), the map shows the migrations that match the filter criteria.

Figure 5.5: Map showing movements in 2014 from low HDI to high HDI countries

Finally, the user can click on a country to display only the movements from and to that country, for example, the United States (figure 5.6).

Figure 5.6: Map showing movements from and to the U.S.

Chapter 6 Conclusions and Future work

6.1 Future work

This Project opens opportunities for expansion in various directions, some of them making small differences to it and others involving deeper changes.
Some of those directions are:

- Implement technical enhancements like different levels of cache or other performance improvements.
- Change the components in the frontend to visualize data in different ways, maybe including different types of maps or graphs that are not maps at all.
- Add a layer of customization, letting users “modify” the criteria of the filters, for example letting them decide what the limits are for an HDI to be considered low or high.
- Use other properties found in Wikidata to make more filters. Formalize and implement the ones proposed in the design chapters.
- Use the properties in Wikidata in other ways, like grouping countries by continent, to visualize not only country-to-country movements but also continent-to-continent ones or similar.
- Use different external sources: other general knowledge databases or other domain-specific databases to obtain other knowledge.
- Use other entity recognition systems to link strings to concepts, or go further and recognize not only strings but images or other types of media.

6.2 Conclusions

This Project made it possible to create graphics that are transformed automatically when data from external resources change. It also resulted in a tool that could be useful for journalists.

Several technical and non-technical skills are needed to make this type of project possible. It also involves ethical and social issues that cannot be solved from Computer Science alone. As an example, depending on the “definition of country” that the developer chooses, the system could produce different results and send wrong data to the users. This Project put an emphasis on the usage of structured data, but these types of issues and ambiguous definitions need to be considered carefully, especially in cases where definitions are ambiguous on purpose. This Project is only a small approach to the topic.
The Project solves a problem and, in the journey of solving it, discovers more problems, some of them with complex solutions and some of them unsolved.

Bibliography

Appendix A API reference

A.1 Entity recognition

The API of the Entity recognition module allows users to introduce information about a migration (the origin and destination of the migration, the amount of people that perform that migration and the date range when the migration happened). Both origin and destination are provided as strings and stored as concepts. The transformation from string to concept is also performed by this module.

A.1.1 Global functions

Recognizer([options])

Returns an instance of Recognizer.

A.1.2 Recognizer instance methods

r.recognize(text, [type])

Performs a search of the text in Wikidata and retrieves an array of all the possible concepts that are close to that text. Parameters:

- String text. The text to look for.
- String type optional. Accepts the value “place”. If specified, it returns only the concepts that are actually places.

r.insert(data, [sources])

Inserts information about a migration into the database. Accepts two parameters:

• **Object data** is an object with 5 fields containing the information to be inserted. The fields are:
  – **String origin.** The origin of the migration.
  – **String destination.** The destination of the migration.
  – **Number amount.** The amount of people that perform the migration.
  – **Date startDate.** The initial date when the migration happened.
  – **Date endDate.** The end date when the migration happened.
• **Object sources optional** is an object with 5 fields containing references to the locations where the data were found. These fields should be compliant with the W3C Annotation standard and include information referencing the URI, position and similar details. The 5 fields have the same names as the fields in the `data` parameter and each corresponds to the source of the respective datum.
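For illustration, the shape of the `data` parameter can be captured by a small validation sketch (the helper below is hypothetical and not part of the module's API):

```javascript
// Hypothetical validator mirroring the five documented fields of the
// `data` object accepted by r.insert (not part of the module itself).
function isValidMigrationData(data) {
  return (
    typeof data.origin === 'string' &&
    typeof data.destination === 'string' &&
    typeof data.amount === 'number' &&
    data.startDate instanceof Date &&
    data.endDate instanceof Date
  );
}

// Example payload, as a caller of r.insert would build it:
const examplePayload = {
  origin: 'Syria',
  destination: 'France',
  amount: 20000,
  startDate: new Date('2016-01-01'),
  endDate: new Date('2016-12-31'),
};
```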
### A.2 Frontend

The components in the frontend part are implemented using the React framework. React is a framework that allows the creation of web components in JavaScript. Each component has inputs (so-called *props*) and internal information (called *state*). All components are part of a Component Tree, which corresponds to the DOM tree that results from rendering the whole Component Tree. Props are used to pass information down the tree from a component to its child(ren). Props can also be functions that act as callbacks. For example, a Button component may specify an onClick prop, which is a function that is called when the button is clicked. React also allows the creation of a *context*, information that is passed down to all the components of the Component Tree.

**Dashboard component**

The root component of the whole application. It is *stateful*. It makes queries to the backend and stores internally the data needed across its children. It processes the data and passes it to its child components. This component has no props. Internally, it stores the data fetched from the backend (i.e. vectors of movements) and has methods to fetch data and handle errors (e.g. connection errors). This component also stores the user choice that is relevant across the entire application: the chosen year.

**Map component**

Renders a map given vectors of movements (origin-destination-amount tuples). This component has one prop:

• **Array movements** an array of objects with three fields:
  – **Object origin** a “Country” object representing the origin of the migration
  – **Object destination** a “Country” object representing the destination of the migration
  – **Number amount** the amount of people that move from “origin” to “destination”

The **Country** object is an object that represents a country. It has two fields:

• **String code** a two-letter country code.
• **String name** the name of the country

The user can choose a single country.
In that case, only the movements from/to that country are shown. The chosen country is stored in this component. This component filters the movements according to this criterion.

**Cloropeth component**

Renders a choropleth map of the Earth. It paints countries using a bi-polar color progression. Countries with negative values are painted in blue and countries with positive values are painted in red. This component has the following props:

• **Array series** which is an array with the information of countries and a number to represent. Each element is an object with three fields:
  – **String code** is a two-letter country code.
  – **String name** is the name of the country
  – **Number amount** is the amount that has to be represented in the map.
• **String selectedCountry**. If specified, this country is “highlighted”.
• **Function onSelect**. This function is called when a country is clicked on the map. The function should have one argument:
  – **String country**. The country code of the clicked country.

This component is also stateful. It fetches GeoJSON data from an external site to get the polygons of the shape of the World map and saves it as internal state. This operation is done only the first time the component is rendered. In this way, no more HTTP requests are necessary even if the props change.

**Table component**

Renders a table of countries and a number associated to each country. It is a table version of the choropleth, mainly created for debugging purposes. It has the same props as the Cloropeth component.

RangeSelector component

Renders a date selector. The user can choose what year to represent. Props:

- Function `onChange(selectedYear)`. This function is called when the user chooses a different year. The function has one argument:
  - `Number selectedYear`. The selected year in 4-digit format.

A.3 Query

It is an HTTP method under the `GET /movements` endpoint. Path parameters:

- `year`. Four-digit year. Returns only the data of that year.
- `filter` optional. If specified, returns only the data that satisfies the filter. It accepts the values `hdi`, `gdppp` and `free`.

It returns an array of `movements`, where a movement is an object with three fields: origin, destination and amount. The Query module performs the following operations:

1. Reads, from the internal database, the migrations that happened in the specified year.
2. Makes a query to Wikidata in order to get the places that match the filter, e.g. a list of origin countries and a list of destination countries.
3. Filters the results read from the database (e.g. a migration is kept in the list if it goes from a country included in the list of origin countries).
4. Aggregates the results.

In all the steps, there are some edge cases and incorrect outputs that may happen depending on the quality of the data. This is discussed in Section 3.2.

A.3.1 Example

Get the migrations in 2016: `GET /movements?year=2016`

Returns

```
[
  {origin: 'es', destination: 'fr', amount: 10000},
  {origin: 'mo', destination: 'es', amount: 15000},
  {origin: 'mo', destination: 'fr', amount: 8000},
  {origin: 'sy', destination: 'fr', amount: 20000},
  {origin: 'sy', destination: 'es', amount: 3000}
]
```
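Steps 3 and 4 of the Query module can be sketched as plain functions over movement records (an illustrative sketch; the function names are assumptions, not the module's actual code):

```javascript
// Step 3: keep a movement only if its origin and destination are in the
// country sets returned by the Wikidata query.
function applyFilter(movements, origins, destinations) {
  return movements.filter(
    (m) => origins.has(m.origin) && destinations.has(m.destination)
  );
}

// Step 4: sum amounts for identical origin-destination pairs.
function aggregate(movements) {
  const totals = new Map();
  for (const m of movements) {
    const key = m.origin + '->' + m.destination;
    totals.set(key, (totals.get(key) || 0) + m.amount);
  }
  return Array.from(totals, ([key, amount]) => {
    const [origin, destination] = key.split('->');
    return { origin, destination, amount };
  });
}
```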
Learning and Evaluating Contextual Embedding of Source Code Aditya Kanade*1 2 Petros Maniatis*2 Gogul Balakrishnan2 Kensen Shi2 Abstract Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less labeled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embedding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifically, first, we curate a massive, deduplicated corpus of 7.4M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model; and, second, we create an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples. Future work on source-code embedding can benefit from reusing our benchmark, and from comparing against CuBERT models as a strong baseline. 1. Introduction Modern software engineering places a high value on writing clean and readable code. This helps other developers understand the author’s intent so that they can maintain and extend the code. Developers use meaningful identifier names and natural-language documentation to make this happen (Martin, 2008). 
As a result, source code contains substantial information that can be exploited by machine-learning algorithms. Indeed, sequence modeling on source code has been shown to be successful in a variety of software-engineering tasks, such as code completion (Hindle et al., 2012; Raychev et al., 2014), source code to pseudo-code mapping (Oda et al., 2015), API-sequence prediction (Gu et al., 2016), program repair (Pu et al., 2016; Gupta et al., 2017), and natural language to code mapping (Iyer et al., 2018), among others. The distributed vector representations of tokens, called token (or word) embeddings, are a crucial component of neural methods for sequence modeling. Learning useful embeddings in a supervised setting with limited data is often difficult. Therefore, many unsupervised learning approaches have been proposed to take advantage of large amounts of unlabeled data that are more readily available. This has resulted in ever more useful pre-trained token embeddings (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017). However, the subtle differences in the meaning of a token in varying contexts are lost when each word is associated with a single representation. Recent techniques for learning contextual embeddings (McCann et al., 2017; Peters et al., 2018; Radford et al., 2018; 2019; Devlin et al., 2019; Yang et al., 2019) provide ways to compute representations of tokens based on their surrounding context, and have shown significant accuracy improvements in downstream tasks, even with only a small number of task-specific parameters. Inspired by the success of pre-trained contextual embeddings for natural languages, we present the first attempt to apply the underlying techniques to source code. In particular, BERT (Devlin et al., 2019) produces a bidirectional Transformer encoder (Vaswani et al., 2017) by training it to predict values of masked tokens, and whether two sentences follow each other in a natural discourse. 
The pre-trained model can be fine-tuned for downstream supervised tasks and has been shown to produce state-of-the-art results on a number of natural-language understanding tasks. For evaluating CuBERT, we create a benchmark of five classification tasks, and a sixth localization and repair task.
The classification tasks range from classification of source code according to presence or absence of certain classes of bugs, to mismatch between a function’s natural language description and its body, to predicting the right kind of exception to catch for a given code fragment. The localization and repair task, defined for variable-misuse bugs (Vasic et al., 2019), is a pointer-prediction task. Although similar tasks have appeared in prior work, the associated datasets come from different languages and varied sources; instead we create a cohesive multiple-task benchmark dataset in this work. To produce a high-quality dataset, we ensure that there is no overlap between pre-training and fine-tuning examples, and that all of the tasks are defined on Python code. We fine-tune CuBERT on each of the classification tasks and compare the results to multi-layered bidirectional LSTM (Hochreiter & Schmidhuber, 1997) models, as well as Transformers (Vaswani et al., 2017). We train the LSTM models from scratch and also using pre-trained Word2Vec embeddings. Our results show that CuBERT consistently outperforms these baseline models by 3.2% to 14.7% across the classification tasks. We perform a number of additional studies by varying the sampling strategies used for training Word2Vec models, and by varying program lengths. In addition, we also show that CuBERT can be fine-tuned effectively using only 33% of the task-specific labeled data and with only 2 epochs, and that, even then, it attains results competitive with the baseline models trained with the full datasets and many more epochs. CuBERT, when fine-tuned on the variable-misuse localization and repair task, produces high classification, localization and localization+repair accuracies and outperforms published state-of-the-art models (Hellendoorn et al., 2020; Vasic et al., 2019). Our contributions are as follows:

- We present the first attempt at pre-training a BERT contextual embedding of source code.
- We show the efficacy of the pre-trained contextual embedding on five classification tasks. Our fine-tuned models outperform baseline LSTM models (with/without Word2Vec embeddings), as well as Transformers trained from scratch, even with reduced training data. - We evaluate CuBERT on a pointer prediction task and show that it outperforms state-of-the-art results significantly. - We make the models and datasets publicly available.1 We hope that future work benefits from our contributions, by reusing our benchmark tasks, and by comparing against our strong baseline models. 2. Related Work Given the abundance of natural-language text, and the relative difficulty of obtaining labeled data, much effort has been devoted to using large corpora to learn about language in an unsupervised fashion, before trying to focus on tasks with small labeled training datasets. Word2Vec (Mikolov et al., 2013a;b) computed word embeddings based on word co-occurrence and proximity, but the same embedding is used regardless of the context. The continued advances in word (Pennington et al., 2014) and subword (Bojanowski et al., 2017) embeddings led to publicly released pre-trained embeddings, used in a variety of tasks. To deal with varying word context, contextual word embeddings were developed (McCann et al., 2017; Peters et al., 2018; Radford et al., 2018; 2019), in which an embedding is learned for the context of a word in a particular sentence, namely the sequence of words preceding it and possibly following it. BERT (Devlin et al., 2019) improved natural-language pre-training by using a de-noising autoencoder. Instead of learning a language model, which is inherently sequential, BERT optimizes for predicting a noised word within a sentence. Such prediction instances are generated by choosing a word position and either keeping it unchanged, removing the word, or replacing the word with a random wrong word. 
It also pre-trains with the objective of predicting whether two sentences can be next to each other. These pre-training objectives, along with the use of a Transformer-based architecture, gave BERT an accuracy boost in a number of NLP tasks over the state-of-the-art. BERT has been improved upon in various ways, including modifying training objectives, utilizing ensembles, combining attention with autoregression (Yang et al., 2019), and expanding pre-training corpora and time (Liu et al., 2019). However, the main architecture of BERT seems to hold up as the state-of-the-art, as of this writing.

¹ https://github.com/google-research/google-research/tree/master/cubert

In the space of programming languages, embeddings have been learned for specific software-engineering tasks (Chen & Monperrus, 2019). These include embeddings of variable and method identifiers using local and global context (Allamanis et al., 2015), abstract syntax trees (ASTs) (Mou et al., 2016; Zhang et al., 2019), AST paths (Alon et al., 2019), memory heap graphs (Li et al., 2016), and ASTs enriched with data-flow information (Allamanis et al., 2018; Hellendoorn et al., 2020). These approaches require analyzing source code beyond simple tokenization. In this work, we derive a pre-trained contextual embedding of tokenized source code without explicitly modeling source-code-specific information, and show that the resulting embedding can be effectively fine-tuned for downstream tasks.

CodeBERT (Feng et al., 2020) targets paired natural-language (NL) and multi-lingual programming-language (PL) tasks, such as code search and generation of code documentation. It pre-trains a Transformer encoder by treating a natural-language description of a function and its body as separate sentences in the sentence-pair representation of BERT. We also handle natural language directly, but do not require such a separation.
Natural-language tokens can be mixed with source-code tokens both within and across sentences in our encoding. One of our benchmark tasks, function-docstring mismatch, illustrates the ability of CuBERT to handle NL-PL tasks.

3. Experimental Setup

We now outline our benchmarks and experimental study. The supplementary material contains deeper detail aimed at reproducing our results.

3.1. Code Corpus for Fine-Tuning Tasks

We use the ETH Py150 corpus (Raychev et al., 2016) to generate datasets for the fine-tuning tasks. This corpus consists of 150K Python files from GitHub, and is partitioned into a training split (100K files) and a test split (50K files). We held out 10K files from the training split as a validation split.

3.2. The GitHub Python Pre-Training Code Corpus

We used the public GitHub repository hosted on Google’s BigQuery platform (the github_repos dataset under BigQuery’s public-data project, bigquery-public-data). We extracted all files ending in .py, under open-source, redistributable licenses, removed symbolic links, and retained only files reported to be in the refs/heads/master branch. This resulted in about 16.2 million files.
To avoid duplication between pre-training and fine-tuning data, we removed files that had high similarity to the files in the ETH Py150 Open corpus, using the method of Allamanis (2018). In particular, two files are considered similar to each other if the Jaccard similarity between the sets of tokens (identifiers and string literals) is above 0.8 and in addition, it is above 0.7 for multi-sets of tokens. This brought the dataset to 14.3 million files. We then further deduplicated the remaining files, by clustering them into equivalence classes holding similar files according to the same similarity metric, and keeping only one exemplar per equivalence class. This helps avoid biasing the pre-trained embedding. Finally, we removed files that could not be parsed. In the end, we were left with 7.4 million Python files containing over 9.3 billion tokens. This is our Python pre-training code corpus. 3.3. Source-Code Modeling We first tokenize a Python program using the standard Python tokenizer (the tokenize package). We leave language keywords intact and produce special tokens for syntactic elements that have either no string representation (e.g., DEDENT tokens, which occur when a nested program scope concludes), or ambiguous interpretation (e.g., new-line characters inside string literals, at the logical end of a Python statement, or in the middle of a Python statement result in distinct special tokens). We split identifiers according to common heuristic rules (e.g., snake or Camel case). Finally, we split string literals using heuristic rules, on white-space characters, and on special characters. We limit all thus produced tokens to a maximum length of 15 characters. We call this the program vocabulary. Our Python pre-training code corpus contained 16 million unique tokens. 
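The heuristic identifier splitting mentioned above can be sketched as follows (an illustrative reimplementation; the paper's exact splitting rules may differ):

```python
import re

def split_identifier(name):
    """Split an identifier on snake_case and CamelCase boundaries.

    Illustrative sketch of the heuristic rules described in the text,
    not the authors' actual implementation.
    """
    parts = []
    for chunk in name.split("_"):
        if chunk:
            # Split on lower->Upper transitions; runs of capitals such as
            # "HTTP" are kept together.
            parts.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z0-9]+", chunk))
    return parts
```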
We greedily compress the program vocabulary into a subword vocabulary (Schuster & Nakajima, 2012) using the SubwordTextEncoder from the Tensor2Tensor project³, resulting in about 50K tokens. All words in the program vocabulary can be losslessly encoded using one or more of the subword tokens. We tokenize programs first into program tokens, as described above, and then encode those tokens one by one in the subword vocabulary. The objective of this encoding scheme is to preserve syntactically meaningful boundaries of tokens. For example, the identifier “snake_case” could be encoded as “snake_case”, preserving the snake case split of its characters, even if the subtoken “e_c” were very popular in the corpus; the latter encoding might result in a smaller representation but would lose the intent of the programmer in using a snake-case identifier. Similarly, “i=0” may be very frequent in the corpus, but we still force it to be encoded as separate tokens i, =, and 0, ensuring that we preserve the distinction between operators and operands. Both the BERT model and the Word2Vec embeddings are built on the subword vocabulary.

² https://github.com/google-research-datasets/eth_py150_open
³ https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/text_encoder.py

3.4. Fine-Tuning Tasks

To evaluate CuBERT, we design five classification tasks and a multi-headed pointer task. These are motivated by prior work, but unfortunately, the associated datasets come from different languages and varied sources. We want the tasks to be on Python code, and for accurate results, we ensure that there is no overlap between pre-training and fine-tuning datasets. We therefore create all the tasks on the ETH Py150 Open² corpus (see Section 3.1). As discussed in Section 3.2, we ensure that there is no duplication between this and the pre-training corpus. We hope that our datasets for these tasks will be useful to others as well.
The fine-tuning tasks are described below. A more detailed discussion is presented in the supplementary material. **Variable-Misuse Classification** Allamanis et al. (2018) observed that developers may mistakenly use an incorrect variable in the place of a correct one. These mistakes may occur when developers copy-paste similar code but forget to rename all occurrences of variables from the original fragment, or when there are similar variable names that can be confused with each other. These can be subtle errors that remain undetected during compilation. The task by Allamanis et al. (2018) is to choose the correct variable name at a location within a C# function. We take the classification version restated by Vasic et al. (2019), wherein, given a function, the task is to predict whether there is a variable misuse at any location in the function, without specifying a particular location to consider. Here, the classifier has to consider all variables and their usages to make the decision. In order to create negative (buggy) examples, we replace a variable use at some location with another variable that is defined within the function. **Wrong Binary Operator** Pradel & Sen (2018) propose the task of detecting whether a binary operator in a given expression is correct. They use features extracted from limited surrounding context. We use the entire function with the goal of detecting whether any binary operator in the function is incorrect. The negative examples are created by randomly replacing some binary operator with another type-compatible operator. **Swapped Operand** Pradel & Sen (2018) propose the wrong binary operand task where a variable or constant is used incorrectly in an expression, but that task is quite similar to the variable-misuse task we already use. We therefore define another class of operand errors where the operands of non-commutative binary operators are swapped. 
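The swapped-operand corruption just described can be sketched with Python's ast module (a minimal sketch; requires Python 3.9+ for ast.unparse, and the particular set of non-commutative operators below is our illustrative choice, not the paper's exact list):

```python
import ast

# Illustrative (not exhaustive) set of non-commutative binary operators.
NONCOMMUTATIVE = (ast.Sub, ast.Div, ast.FloorDiv, ast.Mod,
                  ast.Pow, ast.LShift, ast.RShift)

def swap_first_noncommutative(source):
    """Create a 'Swapped Operand' negative example by swapping the operands
    of the first non-commutative binary operator found, if any."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and isinstance(node.op, NONCOMMUTATIVE):
            node.left, node.right = node.right, node.left
            return ast.unparse(tree)  # Python 3.9+
    return None  # no suitable operator: skip this example
```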
The operands can be arbitrary subexpressions, and are not restricted to be just variables or constants. To simplify example generation, we restrict this task to examples in which the operator and operands all fit within a single line. **Function-Docstring Mismatch** Developers are encouraged to write descriptive docstrings to explain the functionality and usage of functions. This provides parallel corpora between code and natural-language sentences, which have been used for machine translation (Barone & Sennrich, 2017), for detecting uninformative docstrings (Louis et al., 2018), and for evaluating the supervision that docstrings provide in neural code search (Cambronero et al., 2019). We prepare a sentence-pair classification problem where the function and its docstring form two distinct sentences. The positive examples come from the correct function-docstring pairs. We create negative examples by replacing correct docstrings with docstrings of other functions, randomly chosen from the dataset. For this task, the existing docstring is removed from the function body. **Exception Type** While it is possible to write generic exception handlers (e.g., “except Exception” in Python), it is considered a good coding practice to catch and handle the precise exceptions that can be raised by a code fragment.⁴ We identified the 20 most common exception types from the GitHub dataset, excluding the catch-all Exception (full list in Table 1 in the supplementary material). Given a function with an except clause for one of these exception types, we replace the exception with a special “hole” token. The task is the multi-class classification problem of predicting the original exception type. **Variable-Misuse Localization and Repair** As an instance of a non-classification task, we consider the joint classification, localization, and repair version of the variable-misuse task from Vasic et al. (2019).
Given a function, the task is to predict one pointer (called the localization pointer) to identify a variable-misuse location, and another pointer (called the repair pointer) to identify a variable from the same function that is the right one to use at the faulty location. The model is also trained to classify functions that do not contain any variable misuse as bug-free by making the localization pointer point to a special location in the function. We create negative (buggy) examples using the same method as in the Variable-Misuse Classification task.

---
⁴https://google.github.io/styleguide/pyguide.html#24-exceptions

Table 1 lists the sizes of the resulting benchmark datasets extracted from the fine-tuning corpus. The Exception Type task contains significantly fewer examples than the other tasks, since examples for this task only come from functions that catch one of the chosen 20 exception types.

#### 3.5. BERT for Source Code

The BERT model (Devlin et al., 2019) consists of a multi-layered Transformer encoder. It is trained with two tasks: (1) to predict the correct tokens in a fraction of all positions, some of which have been replaced with incorrect tokens or the special `[MASK]` token (the Masked Language Model task, or MLM) and (2) to predict whether the two sentences separated by the special `[SEP]` token follow each other in some natural discourse (the Next-Sentence Prediction task, or NSP). Thus, each example consists of one or two sentences, where a sentence is the concatenation of contiguous lines from the source corpus, sized to fit the target example length. To ensure that every sentence is treated in multiple instances of both MLM and NSP, BERT by default duplicates the corpus 10 times, and generates independently derived examples from each duplicate. With 50% probability, the second example sentence comes from a random document (for NSP).
A token is chosen at random for an MLM prediction (up to 20 per example), and from those chosen, 80% are masked, 10% are left undisturbed, and 10% are replaced with a random token. CuBERT is similarly formulated, but a CuBERT line is a logical code line, as defined by the Python standard. Intuitively, a logical code line is the shortest sequence of consecutive lines that constitutes a legal statement, e.g., it has correctly matching parentheses. We count example lengths by counting the subword tokens of both sentences (see Section 3.3). We train the BERT Large model having 24 layers with 16 attention heads and 1024 hidden units. Sentences are created from our pre-training dataset. Task-specific classifiers pass the embedding of a special start-of-example `[CLS]` token through feed-forward and softmax layers. For the pointer prediction task, the pointers are computed exactly as used in the Variable-Misuse Classification task.

### 4. Experimental Results

#### 4.1. Training Details

CuBERT’s dataset generation duplicates the corpus 10 times, whereas Word2Vec is trained without duplication.
To compensate for this difference, we trained Word2Vec for 10 epochs and CuBERT for 1 epoch.

| Task | Train | Validation | Test |
| --- | --- | --- | --- |
| Variable-Misuse Classification | 700,708 | 8,192 (75,478) | 378,440 |
| Wrong Binary Operator | 459,400 | 8,192 (49,804) | 251,804 |
| Swapped Operand | 236,246 | 8,192 (26,118) | 130,972 |
| Function-Docstring | 340,846 | 8,192 (37,592) | 186,698 |
| Exception Type | 18,480 | 2,088 (2,088) | 10,348 |
| Variable-Misuse Localization and Repair | 700,708 | 8,192 (75,478) | 378,440 |

**Table 1.** Benchmark fine-tuning datasets. Note that for validation, we have subsampled the original datasets (in parentheses) down to 8,192 examples, except for exception classification, which only had 2,088 validation examples, all of which are included.

#### 3.6. Baselines

##### 3.6.1. Word2Vec

We train Word2Vec models using the same pre-training corpus as the BERT model. To maintain parity, we generate the dataset for Word2Vec using the same pipeline as BERT but by disabling masking and generation of negative examples for NSP. The dataset is generated without any duplication. We train both CBOW and Skipgram models using GenSim (Rehůřek & Sojka, 2010). To deal with the large vocabulary, we use negative sampling and hierarchical softmax (Mikolov et al., 2013a;b) to train the two versions. In all, we obtain four types of Word2Vec embeddings.

##### 3.6.2. Bidirectional LSTM and Transformer

In order to obtain context-sensitive encodings of input sequences for the fine-tuning tasks, we use multi-layered bidirectional LSTMs (Hochreiter & Schmidhuber, 1997) (BiLSTMs). These are initialized with the pre-trained Word2Vec embeddings. To further evaluate whether LSTMs alone are sufficient without pre-training, we also train BiLSTMs with an embedding matrix that is initialized from scratch with Xavier initialization (Glorot & Bengio, 2010). We also trained Transformer models (Vaswani et al., 2017) for our fine-tuning tasks. We used BERT’s own Transformer implementation, to ensure comparability of results. For comparison with prior work, we use the unidirectional LSTM and pointer model from Vasic et al. (2019) for the Variable-Misuse Localization and Repair task.

We chose models by validation accuracy, both during hyperparameter searches, and during model selection within an experiment. We pre-train CuBERT with the default configuration of the BERT Large model, one model per example length (128, 256, 512, and 1,024 subword tokens) with batch sizes of 8,192, 4,096, 2,048, and 1,024 respectively, and the default BERT learning rate of $1 \times 10^{-4}$. Fine-tuned models also used the same batch sizes as for pre-training, and BERT’s default learning rate ($5 \times 10^{-5}$). For both, we gradually warm up the learning rate for the first 10% of examples, which is BERT’s default value. For Word2Vec, when training with negative samples, we choose 5 negative samples. The embedding size for all the Word2Vec pre-trained models is set at 1,024. For the baseline BiLSTM models, we performed a hyperparameter search on each task and pre-training configuration separately (5 tasks, each trained with the four Word2Vec embeddings, plus the randomly initialized embeddings), for the 512 example length. For each of these 25 task configurations, we varied the number of layers (1 to 3), the number of hidden units (128, 256 and 512), the LSTM output dropout probability (0.1 and 0.5), and the learning rate ($1 \times 10^{-3}$, $1 \times 10^{-4}$ and $1 \times 10^{-5}$).
We used the Adam (Kingma & Ba, 2014) optimizer throughout, and batch size 8,192 for all tasks except the Exception-Type task, for which we used batch size 64. Invariably, the best hyperparameter selection had 512 hidden units per layer and learning rate of $1 \times 10^{-3}$, but the number of layers (mostly 2 or 3) and dropout probability varied across best task configurations. Though no single Word2Vec configuration is the best, CBOW trained with negative sampling gives the most consistent results overall. For the baseline Transformer models, we originally attempted to train a model of the same configuration as CuBERT. However, the sizes of our fine-tuning datasets seemed too small to train such a large Transformer. Instead, we performed a hyperparameter search for each task individually, for the 512 example length. We varied the number of transformer layers (1 to 6), hidden units (128, 256 and 512), learning rates ($1 \times 10^{-3}$, $5 \times 10^{-4}$, $1 \times 10^{-4}$, $5 \times 10^{-5}$ and $1 \times 10^{-5}$) and batch sizes (512, 1,024, 2,048 and 4,096). The best architecture varied across the tasks: for example, 5 layers with 128 hidden and the highest learning rate worked best for the Function-Docstring task, whereas for the Exception-Type task, 2 layers, 512 hidden, and the second lowest learning rate worked best. Finally, for our baseline pointer model (referred to as LSTM+pointer below) we searched over the following hyperparameter choices: hidden sizes of 512 and 1,024, token embedding sizes of 512 and 1,024, and learning rates of $1 \times 10^{-1}$, $1 \times 10^{-2}$ and $1 \times 10^{-3}$. We used the Adam optimizer, a batch size of 256, and example length 512. 
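The searches described above enumerate a cross-product of settings; for instance, the BiLSTM grid can be generated as follows (a sketch; the names are ours):

```python
import itertools

# Hypothetical grid mirroring the BiLSTM search described in the text.
grid = {
    "layers": [1, 2, 3],
    "hidden": [128, 256, 512],
    "dropout": [0.1, 0.5],
    "lr": [1e-3, 1e-4, 1e-5],
}

def configurations(grid):
    """Yield every combination of hyperparameter settings in the grid."""
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(configurations(grid))  # 3 * 3 * 2 * 3 = 54 runs per task configuration
```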
In contrast to the original work (Vasic et al., 2019), we generated one pair of buggy/bug-free examples per function (rather than one per variable use, per function, which would bias towards longer functions), and use CuBERT’s subword-tokenized vocabulary of 50K subtokens (rather than a limited full-token vocabulary, which leaves many tokens out of vocabulary). We used TPUs for training our models, except for pre-training the Word2Vec embeddings and the pointer model of Vasic et al. (2019); these, and all evaluations, ran on P100 or V100 GPUs. All experiments using pre-trained word or contextual embeddings continued to fine-tune weights throughout training.

#### 4.2. Research Questions

We set out to answer the following research questions. We will address each with our results.

1. **Do contextual embeddings help with source-code analysis tasks, when pre-trained on an unlabeled code corpus?** We compare CuBERT to BiLSTM models with and without pre-trained Word2Vec embeddings on the classification tasks (Section 4.3).
2. **Does fine-tuning actually help, or is the Transformer model by itself sufficient?** We compare fine-tuned CuBERT models to Transformer-based models trained from scratch on the classification tasks (Section 4.4).
3. **How does the performance of CuBERT on the classification tasks scale with the amount of labeled training data?** We compare the performance of fine-tuned CuBERT models when fine-tuning with 33%, 66% and 100% of the task training data (Section 4.5).
4. **How does context size affect CuBERT?** We compare fine-tuning performance for different example lengths on the classification tasks (Section 4.6).
5. **How does CuBERT perform on complex tasks, against state-of-the-art methods?** We implemented and fine-tuned a model for a multi-headed pointer prediction task, namely, the Variable-Misuse Localization and Repair task (Section 4.7). We compare it to the models of Vasic et al. (2019) and Hellendoorn et al. (2020).
Except for Section 4.6, all the results are presented for sequences of length 512. We give examples of classification instances in the supplementary material and include visualizations of attention weights for them.

#### 4.3. Contextual vs. Word Embeddings

The purpose of this analysis is to understand how much pre-trained contextual embeddings help, compared to word embeddings. For each classification task, we trained BiLSTM models starting with each of the Word2Vec embeddings, namely, continuous bag of words (CBOW) and Skipgram trained with negative sampling or hierarchical softmax. We trained the BiLSTM models for 100 epochs and the CuBERT models for 20 epochs, and all models stopped improving by the end. The resulting test accuracies are shown in Table 2 (first 5 rows and next-to-last row). CuBERT consistently outperforms BiLSTM (with the best task-wise Word2Vec configuration) on all tasks, by a margin of 3.2% to 14.7%. Thus, the pre-trained contextual embedding provides superior results even with a smaller budget of 20 epochs, compared to the 100 epochs used for BiLSTMs. The Exception-Type classification task has an order of magnitude less training data than the other tasks (see Table 1). The difference between the performance of BiLSTM and CuBERT is substantially higher for this task. Thus, fine-tuning is of much value for tasks with limited labeled training data. We analyzed the performance of CuBERT with the reduced fine-tuning budget of only 2 and 10 epochs (see the remaining rows of the CuBERT section in Table 2). Except for the Exception Type task, CuBERT outperforms the best 100-epoch BiLSTM within 2 fine-tuning epochs. On the Exception-Type task, CuBERT with 2 fine-tuning epochs outperforms all but two configurations of the BiLSTM baseline. This shows that, even when restricted to just a few fine-tuning epochs, CuBERT can reach accuracies that are comparable to or better than those of BiLSTMs trained with Word2Vec embeddings.
To sanity-check our findings about BiLSTMs, we also trained the BiLSTM models from scratch, without pre-trained embeddings. The results are shown in the first row of Table 2. Compared to those, the use of Word2Vec embeddings performs better by a margin of 2.7% to 14.2%.

#### 4.4. Is Transformer All You Need?

One may wonder if CuBERT’s promising results derive more from using a Transformer-based model for its classification tasks, and less from the actual, unsupervised pre-training. Here we compare our results on the classification tasks to a Transformer-based model trained from scratch, i.e., without the benefit of a pre-trained embedding. As discussed in Section 4.1, the size of the training data limited us to try out Transformers that were substantially smaller than the CuBERT model (BERT Large architecture). All the Transformer models were trained for 100 epochs during which their performance stopped improving. We selected the best model within the chosen hyperparameters for each task based on best validation accuracy. As seen from the last row of Table 2, the performance of CuBERT is substantially higher than the Transformer models trained from scratch. Thus, for the same choice of architecture (i.e., Transformer) pre-training seems to help by enabling training of a larger and better model.

#### 4.5. The Effects of Little Supervision

The big draw of unsupervised pre-training followed by fine-tuning is that some tasks have small labeled datasets. We study here how CuBERT fares with reduced training data. We sampled uniformly the fine-tuning dataset to 33% and 66% of its size, and produced corresponding training datasets for each classification task. We then fine-tuned the pre-trained CuBERT model with each of the 3 different training splits. Validation and testing were done with the same original datasets. Table 3 shows the results.
The Function Docstring task seems robust to the reduction of the training dataset, both early and late in the fine-tuning process (that is, within 2 vs. 20 epochs), whereas the Exception Classification task is heavily impacted by the dataset reduction, given that it has relatively few training examples to begin with. Interestingly enough, for some tasks, even fine-tuning for only 2 epochs and only using a third of the training data outperforms the baselines. For example, for Variable Misuse and Function Docstring, CuBERT at 2 epochs and 33% of training data substantially outperforms the BiLSTM with Word2Vec and the Transformer baselines.

#### 4.6. The Effects of Context

Context size is especially useful in code tasks, given that some relevant information may lie many “sentences” away from its locus of interest. Here we study how reducing the context length (i.e., the length of the examples used to pre-train and fine-tune) affects performance. We produce data with shorter example lengths, by first pre-training a model on a given example length, and then fine-tuning that model on the corresponding task with examples of that same example length.⁵ Table 4 shows the results. Although context seems to be important to most tasks, the Function Docstring task paradoxically improves with less context. This may be because the task primarily depends on

⁵Note that we did not attempt to, say, pre-train on length 1,024 and then fine-tune that model on length 256-examples, which may also be a practical scenario.

For comparison, we also evaluated the BiLSTM model on varying example lengths for the Variable-Misuse task with CBOW and negative sampling (last column of Table 4). More context does seem to benefit the BiLSTM Variable-Misuse classifier as well. However, the improvement offered by CuBERT with increasing context is significantly greater.

#### 4.7. Evaluation on a Multi-Headed Pointer Task

We now discuss the results of fine-tuning CuBERT to predict the localization and repair pointers for the variable-misuse task. For this task, we implement the multi-headed pointer model from Vasic et al. (2019) on top of CuBERT. The baseline consists of the same pointer model on a unidirectional LSTM as used by Vasic et al. (2019). We refer to these models as CuBERT+pointer and LSTM+pointer, respectively. Due to limitations of space, we omit the details of the pointer model and refer the reader to the above paper. However, the two implementations are identical above the sequence encoding layer; the difference is the BERT encoder versus an LSTM encoder. As reported in Section 4 of that work, to enable comparison with an enumerative approach, the evaluation was performed only on 12K test examples. Instead, here we report the numbers on all 378K of our test examples for both models. We trained the baseline model for 100 epochs and fine-tuned CuBERT for 2, 10, and 20 epochs.

**Table 5.** Variable-misuse localization and repair task. Comparison of the LSTM+pointer model (Vasic et al., 2019) to our fine-tuned CuBERT+pointer model. We also show results on the test data by Hellendoorn et al. (2020) computed by us and reported by the authors in their Table 1. In the Test Data column, C means our CuBERT test dataset, and H means the test dataset used by Hellendoorn et al. (2020).

Table 5 gives the results along the same metrics as Vasic et al. (2019). The metrics are defined as follows: 1) True Positive is the percentage of bug-free functions classified as bug-free. 2) Classification Accuracy is the percentage of correctly classified examples (between bug-free and buggy). 3) Localization Accuracy is the percentage of buggy examples for which the localization pointer correctly identifies the bug location.
4) Localization+Repair Accuracy is the percentage of buggy examples for which both the localization and repair pointers make correct predictions. As seen from Table 5 (top 4 rows), CuBERT+pointer outperforms LSTM+pointer consistently across all the metrics, and even within 2 and 10 epochs. More recently, Hellendoorn et al. (2020) evaluated hybrid models for the same task, combining graph neural networks, Transformers, and RNNs, and greatly improving prior results. To compare, we obtained the same test dataset from the authors, and evaluated our CuBERT fine-tuned model on it. The last four rows of Table 5 show our results and the results reported in that work. Interestingly, the models by Hellendoorn et al. (2020) make use of richer input representations, including syntax, data flow, and control flow. Nevertheless, CuBERT outperforms them while using only a lexical representation of the input program.

### 5. Conclusions and Future Work

We present the first attempt at pre-trained contextual embedding of source code by training a BERT model, called CuBERT, which we fine-tuned on five classification tasks, and compared against BiLSTM with Word2Vec embeddings and Transformer models. As a more challenging task, we also evaluated CuBERT on a multi-headed pointer prediction task. CuBERT outperformed the baseline models consistently. We evaluated CuBERT with less data and fewer epochs, highlighting the benefits of pre-training on a massive code corpus. We use only source-code tokens and leave it to the underlying Transformer model to infer any structural interactions between them through self-attention. Prior work (Allamanis et al., 2018; Hellendoorn et al., 2020) has argued for explicitly using structural program information (e.g., control flow and data flow). It is an interesting avenue of future work to incorporate such information in pre-training using relation-aware Transformers (Shaw et al., 2018). However, our improved results in comparison to Hellendoorn et al.
(2020) show that CuBERT is a simple yet powerful technique and provides a strong baseline for future work on source-code representations. While surpassing the accuracies achieved by CuBERT with newer models and pre-training/fine-tuning methods would be a natural extension to this work, we also envision other follow-up work. There is increasing interest in developing pre-training methods that can produce smaller models more efficiently and that trade off accuracy for reduced model size. Further, our benchmark could be valuable to techniques that explore other program representations (e.g., trees and graphs), in multi-task learning, and to develop related tasks such as program synthesis.

### Acknowledgements

We are indebted to Daniel Tarlow for his guidance and generous advice throughout the development of this work. Our work has also improved thanks to feedback, use cases, helpful libraries, and proofs of concept offered by David Bieber, Vincent Hellendoorn, Ben Lerner, Hyoontae Lim, Rishabh Singh, Charles Sutton, and Manushree Vijayvergiya. Finally, we are grateful to the anonymous reviewers, who gave useful, constructive comments and helped us improve our presentation and results.

### References

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
B. Steffen, C. Barry Jay, M. Mendler. Compositional characterization of observable program properties. Informatique théorique et applications, tome 26, no 5 (1992), p. 403-424. <http://www.numdam.org/item?id=ITA_1992__26_5_403_0> © AFCET, 1992, all rights reserved.

COMPOSITIONAL CHARACTERIZATION OF OBSERVABLE PROGRAM PROPERTIES (*)

by B. Steffen (1), C. Barry Jay (2) and M. Mendler (3)

Communicated by G. Longo

Abstract. — In this paper we model both program behaviours and abstractions between them as lax functors, which generalize abstract interpretations by exploiting the natural ordering of program properties. This generalization provides a framework in which correctness (safety) and completeness of abstract interpretations naturally arise from this order. Furthermore, it supports modular and stepwise refinement: given a program behaviour, its characterization, which is a "best" correct and complete denotational semantics for it, can be determined in a compositional way.
1. INTRODUCTION

Abstract interpretation is a method for analyzing program behaviours, i.e. the relationship between programs and their observable properties [CC77a, CC77b, Nie86, AH87, JN90]. It abstracts from standard (denotational) semantics for programming languages to non-standard semantics, which are intended to retain correct (safe), but not necessarily complete, information about given properties of interest. This intention is hard to specify without a precise notion of behaviour, which, despite its primacy, was missing in the framework of abstract interpretation.

(*) Received June 1991, revised August 1991.
(1) University of Aarhus, Denmark.
(2) LFCS, University of Edinburgh, Scotland.
(3) Institute for Computer Aided Circuit Design, University of Erlangen, Germany.

In this paper, the notion of behaviour is defined formally as a simple generalization of abstract interpretation, in which operations (specifically, sequential composition) are preserved up to a notion of inequality, which, intuitively, expresses precision of information. It can then be used to specify the properties of programs, which must be respected, both by abstract interpretations and the abstractions between them. This notion of behaviour is not restricted to programming languages, nor need it be derived from a standard denotational semantics. For example, abstractions between semantics can also be viewed as behaviours in our framework, so preserving their direction and composition. This contrasts with logical relations [Plo80], which are symmetric and do not compose, counter to intuition [MJ86]. Moreover, this precision ordering on properties defines a partial order on behaviours so that correctness and completeness of one behaviour for another arise naturally.
Treating abstract interpretations as behaviours provides an intuitive and simple notion of correctness and completeness of one abstract interpretation for another, generalizing the approach using correctness correspondences [JN90, MJ86], which, aside from being complicated, yields a non-transitive notion of correctness. Unlike denotational semantics or abstract interpretations, behaviours are not, in general, compositional. However, compositionality can be systematically recovered by applying the characterization functor, which maps a behaviour to the abstract interpretation that identifies those programs which behave identically in any context. This construction preserves simultaneous observation and stepwise construction of behaviours, and therefore permits the hierarchical development of abstract interpretations from behavioural specifications. The development of this paper is based on a categorical framework for two reasons. First, it provides a very general and well-developed mathematical background for computer science in general, and typed programming languages in particular. Second, the inequalities which are central to our concept of behaviour have been studied extensively as lax functors [KS74]. However, neither of these reasons for using categories is imperative: the point is simply that behaviours preserve operations up to inequality. This is equally meaningful for untyped languages, where the programs form a set equipped with some operations, and our behaviours are a form of "weak" homomorphism. Altogether, the paper is structured as follows. After sketching our model in Section 2, we develop our notion of behaviour in Section 3. We introduce simulation relations in Section 3.1 in order to motivate the subsequent development, where behaviours are defined as lax functors (Section 3.3) between ordered categories (Section 3.2).
Subsequently, we define the dual notions of correctness and completeness of one behaviour (abstract interpretation) wrt another in Section 3.4. Section 4 presents (Section 4.1) and illustrates (Section 4.2) the main result of this paper, as well as two corollaries, which establish the modularity and functoriality of our framework (Section 4.3). Finally, Sections 5 and 6 mention conclusions and directions for future work.

2. THE MODEL

Our model consists of ordered categories (similar to O-categories [SP82]), with behaviours corresponding to morphisms between them. It can be sketched by means of the following diagram:
\[
\begin{array}{ccc}
\mathcal{L} & \xrightarrow{\;D\;} & \mathcal{D} \\
 & {\scriptstyle B}\searrow & \downarrow{\scriptstyle B'} \\
 & & \mathcal{O}
\end{array}
\]
\(\mathcal{L}\) is a category which we identify with a programming language: its objects are types and its morphisms are programs. Denotational semantics and abstract interpretations \(D : \mathcal{L} \to \mathcal{D}\) are both structure-preserving functors (into, say, a category of domains). For the purposes of this exposition, we consider the simplest case, where the only structure of \(\mathcal{L}\) is composition. \(\mathcal{O}\) is an ordered category of observations or properties, i.e. its morphisms are ordered in a way compatible with composition, with smaller morphisms representing stronger properties. For example, for strictness analysis one usually considers \(\mathcal{O} = \Omega\) (cf. Example 3.5-3), which has one object and two morphisms \(\bot\) (reflecting strictness wrt the parameter under consideration) and \(\top\) (reflecting that no information could be inferred), satisfying \(\bot \leq \top\). A behaviour \(B : \mathcal{L} \to \mathcal{O}\) is an assignment of properties to programs which is weakly functorial or compositional, i.e. is a lax functor (Section 3.3). For example, the strictness of a composite program \(f; g\) cannot be inferred from the strictness of its components \(f\) and \(g\).
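The strictness example can be made concrete in a minimal executable sketch (a hypothetical Python encoding, not taken from the paper): the two morphisms of \(\Omega\) are represented as the integers 0 (for \(\bot\), "provably strict") and 1 (for \(\top\), "no information"), and a behaviour `B` tests functions over a flat domain in which `None` plays the role of bottom.

```python
# Hypothetical encoding of Omega: 0 stands for BOT ("provably strict"),
# 1 for TOP ("no information"); the integer order models BOT <= TOP.
BOT, TOP = 0, 1

def omega_compose(p, q):
    """Abstract composition in Omega: a composite is provably strict
    only when both components are provably strict."""
    return BOT if (p, q) == (BOT, BOT) else TOP

def B(f):
    """Strictness behaviour: does f map bottom (here: None) to bottom?"""
    return BOT if f(None) is None else TOP

def seq(f, g):
    """Sequential composition in diagrammatic order, f; g."""
    return lambda x: g(f(x))

f = lambda x: 0         # a non-strict constant: B(f) = TOP
g = lambda x: None      # maps everything to bottom: B(g) = BOT

# The composite happens to be strict, but composing the component
# observations can only report TOP: information is lost, never invented.
assert B(seq(f, g)) == BOT
assert omega_compose(B(f), B(g)) == TOP
assert B(seq(f, g)) <= omega_compose(B(f), B(g))
```

The final assertion instantiates the laxness of behaviours, witnessed at a pair of programs where the inequality is strict.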
Rather, for a strictness behaviour $B$ we have
$$B(f; g) \leq Bf; Bg$$
which allows us to infer correct, but incomplete, information about $f; g$ from the behaviours of $f$ and $g$: e.g. if $f$ and $g$ are both strict then so is $f; g$, but otherwise no information can be deduced. Let now $B' : \mathcal{D} \rightarrow \mathcal{O}$ be a given behaviour (lax functor) for the semantics $D$ (e.g. strictness for continuous functions between domains). Then $D$ is correct for $B$ if
$$B \leq D; B'$$
Completeness is exactly dual, i.e. $D$ is complete for $B$ if $D; B' \leq B$. Thus, $D$ is correct and complete for $B$ if $D; B' = B$, as indicated by the diagram above. The Characterization Theorem 4.6 states that every behaviour $B$ has a "best" correct and complete abstract interpretation $\mathcal{Q}B$, which is its characterization. More precisely, we factorize the lax functor $B$ as a functor $\mathcal{Q}B$ followed by a lax functor $\varepsilon_B$. Data types are preserved by $\mathcal{Q}B$, i.e. it is injective on objects, and it is computationally relevant, i.e. surjective on morphisms. Its effect is to identify those programs which have the same behaviour in any context. That $\mathcal{Q}B$ is the "best" possible such abstract interpretation refers to the following universal property: let $D$ be another abstract interpretation for $B$ which is correct and complete, datatype preserving, and computationally relevant (Section 4.1). Then $\mathcal{Q}B$ factors through $D$ in a unique way. Behaviours may have structure themselves: they may either represent the simultaneous observation of some more primitive behaviours, or they may be constructed by stepwise abstraction. In fact, this structure is preserved by the characterization functor $\mathcal{Q}$, as can be easily derived from the Characterization Theorem 4.6: First, $\mathcal{Q}$ is modular, i.e.
the characterization of a behaviour, which is the simultaneous observation of a pair of behaviours $B_1$ and $B_2$, is obtained from their characterizations using categorical products. Second, $\mathcal{Q}$ is functorial. Hence, the characterization of the stepwise abstraction $B_1; B_2$ along two behaviours factors through the characterization of $B_1$.$^1$ Thus correct and complete abstract interpretations can be constructed hierarchically along the structure of their behavioural specifications, which is reminiscent of the well-known paradigm of software development.

$^1$ This is particularly useful for data flow analysis, since one can successively abstract from certain program properties until the universal model $\mathcal{U}$ is decidable. Of course, properties like decidability are not covered by our framework; they must be investigated separately.

**Related Approaches**

[CC79, Ste87, Ste89] are concerned with the systematic development of abstract interpretations for imperative languages. Cousot/Cousot consider only the phenomenon of simultaneous observation. Moreover, they do not aim to obtain an abstract interpretation which satisfies a specific behaviour. Rather, they consider a given abstraction function, and try to mimic the complete semantics (static semantics) on the corresponding codomain as precisely as possible. In contrast, like this paper, [Ste87, Ste89] are concerned with developing an abstract interpretation that satisfies a given program behaviour, or, in their terminology, one which cannot be distinguished from its specification on a given level of observation. Whereas [Ste87] only deals with functoriality, [Ste89] also considers modularity. The categorical approach presented here generalizes and simplifies these approaches.

**3. BEHAVIOURS OF PROGRAMS**

A programming language is represented by a category $\mathcal{L}$ in our setting; the types of the language are its objects (or, if untyped, then it has a single
object) and the programs are its morphisms. Usually, the language will have further structure (e.g. λ-abstraction or fixpoints) which we would expect semantics to preserve (see Section 6), but here, for the sake of simplicity, we will refrain from assuming more than sequential composition and empty programs, which are the composition and identities, respectively, of \( \mathcal{L} \). (But see [Jay90a, Jay90e, Jay91] for related treatments of handling more structure.) Thus, a denotational semantics for \( \mathcal{L} \) is a functor \( \mathcal{L} \to \mathcal{D} \). Typically, \( \mathcal{D} \) is the category of domains \( \mathbf{Dom} \) or, alternatively, one of its full subcategories. For some authors (e.g. [BHA86]) the semantics is represented as a single domain, the coalesced sum of the objects of \( \mathcal{D} \), but this suppression of typing information obscures the functoriality of the semantics. Abstract interpretations are also functors, and may be thought of as non-standard denotational semantics. Each family of observable properties of \( \mathcal{L} \) (e.g. \{"strict", "no-information"\}) is naturally ordered by implication, so that these properties, or observations, form an ordered category (Section 3.2). A behaviour maps programs (or perhaps denotations) into an ordered category of properties, or observations. The only behaviours of interest are those for which the property of a composite program is at least as strong as that determined by its parts, whence a behaviour is a lax functor (Section 3.3). Lax functors also arise as abstractions between abstract interpretations, e.g. the abstraction map \( \text{abs} \) for strictness of [BHA86]. Once the nature of behaviour is made explicit, the definitions of correctness and of the dual notion of completeness arise naturally from the ordering. To motivate our definition of behaviours as lax functors into ordered categories we will begin with simulation relations, which generate an important class of behaviours. ### 3.1.
Simulation Relations

**Definition 3.1:** Let \( \mathcal{A} \) and \( \mathcal{B} \) be categories. A simulation relation \( R: \mathcal{A} \rightarrow \mathcal{B} \) from \( \mathcal{A} \) to \( \mathcal{B} \) consists of (i) a function, also denoted \( R \), from objects of \( \mathcal{A} \) to objects of \( \mathcal{B} \), and (ii) for each pair of objects \( A \) and \( A' \) of \( \mathcal{A} \), a relation
\[ R_{A,A'}: \mathcal{A}(A,A') \rightarrow \mathcal{B}(RA,RA') \]
between the homsets of \( \mathcal{A} \) and \( \mathcal{B} \), which together satisfy
(iii) \( g \in Rf \) and \( g' \in Rf' \) implies \( g; g' \in R(f; f') \) for morphisms
$$A \xrightarrow{f} A' \xrightarrow{f'} A''$$
$$R A \xrightarrow{g} R A' \xrightarrow{g'} R A''$$
(iv) for any object \( A \) in \( \mathcal{A} \), \( \text{id}_{RA} \in R_{A,A}(\text{id}_A) \).
Note that the simulation relations that are (partial) functions on the homsets are just (partial) functors.

**Example 3.2:** A subcategory \( \mathcal{A}' \) of \( \mathcal{A} \) which contains all of the objects of \( \mathcal{A} \) is called **slim**. \( \mathcal{A}' \) may be thought of as the collection of morphisms (programs) having some property satisfied by identities and preserved by composition. Then there is a simulation relation \( R : \mathcal{A} \rightarrow \mathbf{1} \) (where \( \mathbf{1} \) is the terminal category, with one object \( * \) and whose sole morphism is \( \text{id}_* \)) defined by
(i) \( RA = * \) for all objects \( A \)
(ii) for \( f : A \rightarrow B \), \( \text{id}_* \in R_{A,B}(f) \) iff \( f \) is a morphism of \( \mathcal{A}' \).
Conversely, such a simulation relation \( R \) determines a slim subcategory of \( \mathcal{A} \), whose morphisms are those related to \( \text{id}_* \) by \( R \). Thus simulation relations with codomain \( \mathbf{1} \) directly correspond to program properties that are satisfied by identities and preserved by composition.
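Example 3.2's correspondence can be sanity-checked on a concrete property (a hypothetical Python sketch; strictness of functions over a flat domain, with `None` playing bottom, stands in for the property defining \(\mathcal{A}'\)):

```python
# Hypothetical sketch: strict functions (those mapping bottom to bottom)
# form a slim subcategory: the property holds for identities and is
# preserved by sequential composition.
def is_strict(f):
    return f(None) is None

def seq(f, g):                    # diagrammatic composition f; g
    return lambda x: g(f(x))

identity = lambda x: x
succ = lambda x: None if x is None else x + 1

assert is_strict(identity)                              # identities belong to A'
assert is_strict(succ) and is_strict(seq(succ, succ))   # closure (on a sample)
assert not is_strict(lambda x: 0)                       # constants fall outside
```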
More complicated behaviours for $\mathcal{A}$ are obtained by expanding the codomain of $R$. Simulation relations compose, and so can be used to model stepwise abstraction. Let $R : \mathcal{A} \rightarrow \mathcal{B}$ and $S : \mathcal{B} \rightarrow \mathcal{C}$ be simulation relations. Then $R; S : \mathcal{A} \rightarrow \mathcal{C}$ is the simulation relation defined by
(i) $(R; S)A = S(RA)$
(ii) for $f : A \rightarrow B$, $(R; S)_{A,B}(f) = \bigcup \{ S_{RA,RB}(g) \mid g \in R_{A,B}(f) \}$.
For example, $R$ may represent a translation into another programming language, whose behaviour is given by $S$ (Section 4.3). Note that $\mathcal{B}$ plays the role of observations for $R$ and also that of a language for $S$. This phenomenon leads us to model observations by categories. Mycroft and Jones [MJ86] modelled abstraction using logical relations, which are like simulation relations except that they use a relation between the objects of the two categories instead of a function. This additional freedom allows a single type to be abstracted to a family of types, which is counter-intuitive for abstraction, as is the fact that composites of logical relations are not necessarily logical relations. We will introduce a notion of abstraction that generalizes simulation relations while avoiding these problems (Definition 3.9). Categorical products are used to represent a pair of morphisms $B : \mathcal{L} \rightarrow \mathcal{O}$ and $B' : \mathcal{L} \rightarrow \mathcal{O}'$ by a single morphism $\langle B, B' \rangle : \mathcal{L} \rightarrow \mathcal{O} \times \mathcal{O}'$. The original morphisms are recovered by projection. Thus, if the morphisms are behaviours then the induced behaviour into the product represents their simultaneous observation. Therefore, we believe that any adequate category of behaviours must have products in order to allow the modular construction of complex behaviours from their components.
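Simultaneous observation via pairing can be sketched concretely (a hypothetical Python encoding): two behaviours, strictness and a "definedness" check, are paired into a single behaviour into the product order, and each is recovered by projection.

```python
# Hypothetical sketch of simultaneous observation <B, B'>.
# Each component is an Omega-like value: 0 = strong property holds,
# 1 = no information; pairs are compared pointwise in the product.
DOM = [None, 0, 1]                 # a small flat domain; None plays bottom

def strictness(f):
    return 0 if f(None) is None else 1

def definedness(f):
    """0 iff f never sends a defined input to bottom (checked over DOM)."""
    return 0 if all(f(x) is not None for x in DOM if x is not None) else 1

def paired(f):                     # the induced behaviour into the product
    return (strictness(f), definedness(f))

succ = lambda x: None if x is None else x + 1
bot = lambda x: None               # the everywhere-bottom function

assert paired(succ) == (0, 0)      # strict and defined
assert paired(bot) == (0, 1)       # strict but not defined
assert paired(succ)[0] == strictness(succ)   # recovered by projection
```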
This excludes the category of simulation relations:

**Proposition 3.3:** The category of simulation relations does not have binary products.

**Proof:** It suffices to show that there is no product of $\mathbf{1}$ with itself, i.e. there is no category $\mathcal{X}$ such that for each category $\mathcal{A}$, the simulation relations $\mathcal{A} \rightarrow \mathcal{X}$ are in bijection with pairs of slim subcategories of $\mathcal{A}$ (Example 3.2). Assume that such a category $\mathcal{X}$ exists. $\mathbf{1}$ has a unique slim subcategory. Thus there is a unique simulation relation $\mathbf{1} \rightarrow \mathcal{X}$, which forces $\mathcal{X}$ to have a unique object $*$, whose monoid of endomorphisms $\mathcal{X}(*, *)$ has a unique submonoid, i.e. is trivial. Thus $\mathcal{X}$ is isomorphic to $\mathbf{1}$. On the other hand, simulation relations into $\mathbf{1}$ are in bijection with individual slim subcategories, which yields a contradiction. $\square$

In order to guarantee the modularity of the framework, one must generalize from relations to lax functors between ordered categories, whose definition is our next goal.

### 3.2. Ordered Categories

One abstract interpretation is correct (or safe) wrt another if the denotations of the former have weaker (fewer) properties. To capture this ordering of properties we interpret programming languages in ordered categories, which generalize categories of domains.

**Definition 3.4:** An ordered category is a category whose homsets are partially ordered, with composition preserving the order, i.e. if $f \leq g : A \rightarrow B$ and $f' \leq g' : B \rightarrow C$ then $f; f' \leq g; g' : A \rightarrow C$. In short, an ordered category is a category enriched over partial orders [KS74].

**Example 3.5:** 1. Let $\mathbf{Dom}$ be the category of domains. With the pointwise ordering of continuous functions it is an ordered category.

2.
Let $\mathcal{O}$ be any category. Then its power category $\mathbf{P}\mathcal{O}$ has the same objects as $\mathcal{O}$, with homsets given by the powersets of those of $\mathcal{O}$:
$$\mathbf{P}\mathcal{O}(A, B) = \{ S \mid S \subseteq \mathcal{O}(A, B) \}$$
ordered by subset inclusion. The identity for an object $A$ is $\{\mathrm{id}_A\}$ and composition is computed pointwise: given two morphisms $f = \{f_i \mid i \in I\}: A \to B$ and $g = \{g_j \mid j \in J\}: B \to C$ of $\mathbf{P}\mathcal{O}$, then
$$f; g = \{f_i; g_j \mid i \in I, j \in J\}.$$
For instance, let $\mathbf{1}$ be the terminal category, with one object $*$ and whose sole morphism is $\mathrm{id}_*$. Then $\mathbf{2} = \mathbf{P}\mathbf{1}$ is the category with one object $*$ and two morphisms $\emptyset \subseteq \{\mathrm{id}_*\}$. If $\mathcal{O}$ is a category of properties then $\mathbf{P}\mathcal{O}$ is a category of families of properties, with larger morphisms representing more properties.

3. If $\mathcal{B}$ is an ordered category then $\mathcal{B}^{\text{co}}$, the local dual of $\mathcal{B}$, is the ordered category obtained by reversing the orders on the homsets. Thus the local duals of power categories represent stronger properties by smaller morphisms, as is usual. For example, in $\mathbf{2}^{\text{co}}$ we have $\{\mathrm{id}_*\} \leq \emptyset$. This ordered category will be used to represent a single property, and so deserves special terminology: we call it $\Omega$ and denote $\{\mathrm{id}_*\}$ by $\bot$ and $\emptyset$ by $\top$.

4. Any ordinary category $\mathcal{L}$ may be coerced to a discrete ordered category by giving its homsets the discrete order, i.e. $f \leq g$ iff $f = g$. Then $\mathcal{L}^{\text{co}} = \mathcal{L}$.

5. Categories and simulation relations form an ordered category in the obvious way: composition is defined in Section 3.1 and, clearly, the identity functors are the identity simulation relations.
Simulation relations are ordered by letting
$$R \leq S : \mathcal{A} \rightarrow \mathcal{B}$$
if they agree on objects and if $R_{A,B}(f) \subseteq S_{A,B}(f)$ for every morphism $f : A \to B$.

### 3.3. Lax Functors

Lax functors are a weak notion of functor appropriate to ordered categories and the study of behaviour. The laxness of the functor reflects the loss of information that arises when approximating the behaviour of a large program by composing the behaviours of its parts.

**Definition 3.6:** Let $\mathcal{A}$ and $\mathcal{B}$ be ordered categories. A lax functor or behaviour $F: \mathcal{A} \to \mathcal{B}$ consists of (i) a function, also called $F$, from objects of $\mathcal{A}$ to objects of $\mathcal{B}$, and (ii) for each pair of objects $A$ and $A'$ of $\mathcal{A}$, an order-preserving function
$$F_{A, A'}: \mathcal{A}(A, A') \to \mathcal{B}(FA, FA')$$
which together satisfy
(iii) given morphisms $f: A \to A'$ and $g: A' \to A''$, then
$$F(f; g) \leq Ff; Fg$$
(iv) given an object $A$ of $\mathcal{A}$, then
$$F\mathrm{id}_A \leq \mathrm{id}_{FA}$$
If these inequalities are actually equalities then $F$ is an ordered or rigid functor. Also, if the inequalities (iii) and (iv) are reversed (so that $Ff; Fg \leq F(f; g)$ and $\mathrm{id}_{FA} \leq F\mathrm{id}_A$) then $F$ is called a colax functor. Note that the colax functors $F: \mathcal{A} \to \mathcal{B}$ are just the lax functors $\mathcal{A}^{co} \to \mathcal{B}^{co}$.

Given a fixed start state, a typical behaviour for an imperative language would simply be to consider the effects of programs on a distinguished variable that we regard as an input-output parameter. This behaviour is certainly not compositional (i.e. does not define a rigid functor), because side effects of the first program part on other variables may change the effect of the second program part. Thus the behaviour of a composite cannot be inferred from its component behaviours. However, it can be safely approximated by "no information", which guarantees the properties of a lax functor.
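This failure of compositionality can be sketched concretely (a hypothetical Python encoding): programs are state transformers over dictionaries, and the behaviour observes only the final value of a distinguished variable `"x"` from a fixed start state.

```python
# Hypothetical sketch: observing only the distinguished variable "x"
# is not compositional, because side effects on other variables matter.
START = {"x": 0, "y": 0}

def observe(prog):
    """The behaviour: the value of "x" after running prog from START."""
    return prog(dict(START))["x"]

def seq(p, q):                     # sequential composition p; q
    return lambda s: q(p(s))

def p(s):                          # touches only y
    s["y"] = 1
    return s

def q(s):                          # copies y into x
    s["x"] = s["y"]
    return s

# Observed in isolation, neither program changes "x" ...
assert observe(p) == 0 and observe(q) == 0
# ... yet their composite does, so the composite's observation cannot be
# computed from the component observations; "no information" is the only
# safe approximation.
assert observe(seq(p, q)) == 1
```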
Another example is the strictness behaviour of functional languages. We will concentrate on this example in the sequel.

**Example 3.7:** 1. Strictness ([AH87]) for $\mathbf{Dom}$ is given by the lax functor $B': \mathbf{Dom} \to \Omega$ which maps all domains to $*$ and which is defined for a continuous function $f: X \to Y$ by
$$B'f = \begin{cases} \bot & \text{if } f(\bot) = \bot \\ \top & \text{otherwise} \end{cases}$$
Subcategories of $\mathbf{Dom}$ inherit this behaviour, by composition with the inclusion functor.

2. Let $\mathcal{L}$ be a programming language, i.e. a discrete ordered category. Then every denotational semantics $D : \mathcal{L} \rightarrow \mathbf{Dom}$ yields a strictness behaviour for $\mathcal{L}$:
$$D; B' : \mathcal{L} \rightarrow \Omega$$

3. Any functor $F : \mathcal{A} \rightarrow \mathcal{B}$ is a lax functor if we regard $\mathcal{A}$ and $\mathcal{B}$ as discrete ordered categories.

4. Composites of lax functors are lax. They can be used for the stepwise construction of behaviours. For example, if $T : \mathcal{L} \rightarrow \mathcal{L}'$ is a functor (say, realizing a translation from $\mathcal{L}$ into $\mathcal{L}'$) and $B' : \mathcal{L}' \rightarrow \mathcal{B}$ is a behaviour for $\mathcal{L}'$ then $T; B'$ is a behaviour for $\mathcal{L}$ (see Section 4.3).

5. Let $R : \mathcal{L} \rightarrow \mathcal{O}$ be a simulation relation. It can be thought of as a colax functor $\mathcal{L} \rightarrow \mathbf{P}\mathcal{O}$ or, equivalently, a lax functor $\mathcal{L} \rightarrow (\mathbf{P}\mathcal{O})^{\text{co}}$ (since $\mathcal{L}$ is discrete). For example, slim subcategories of $\mathcal{L}$ correspond to lax functors $\mathcal{L} \rightarrow \Omega$.

The ordered categories and lax functors themselves form an ordered category $\mathbf{Ord}$, wherein $F \leq G : \mathcal{A} \rightarrow \mathcal{B}$ if $F$ and $G$ agree on objects and $Ff \leq Gf$ for each morphism $f$.
In contrast to simulation relations, lax functors can represent simultaneous observations, as can be inferred from:

**Proposition 3.8:** $\mathbf{Ord}$ has cartesian products. The cartesian product of ordered categories $\mathcal{O}$ and $\mathcal{O}'$ is their cartesian product $\mathcal{O} \times \mathcal{O}'$ as ordinary categories, with the pointwise ordering on the homsets.

**Proof:** First note that the pointwise ordering in $\mathcal{O} \times \mathcal{O}'$ ensures that the ordinary projections $\pi_1 : \mathcal{O} \times \mathcal{O}' \rightarrow \mathcal{O}$ and $\pi_2 : \mathcal{O} \times \mathcal{O}' \rightarrow \mathcal{O}'$ are lax, in fact rigid, functors. Now let $F : \mathcal{A} \rightarrow \mathcal{O}$ and $F' : \mathcal{A} \rightarrow \mathcal{O}'$ be lax functors. Then pointwise pairing of objects and morphisms defines a lax functor
$$\langle F, F' \rangle : \mathcal{A} \rightarrow \mathcal{O} \times \mathcal{O}'$$
It is easy to establish that this lax functor has the universal properties that make $\mathcal{O} \times \mathcal{O}'$ into a categorical product in $\mathbf{Ord}$. □

This proposition can be extended to arbitrary limits, so that general methods of combining observations are possible; e.g. pullbacks could be used to represent sharing constraints. Lax functors are the promised elaboration of simulation relations (cf. Example 3.7-5), which constitute an adequate notion of abstraction between behaviours, and in particular, abstract interpretations:

**Definition 3.9:** Let $B: \mathcal{L} \rightarrow \mathcal{O}$ and $B': \mathcal{L} \rightarrow \mathcal{O}'$ be behaviours for $\mathcal{L}$. An abstraction $F: B' \rightarrow B$ is a lax functor $F : \mathcal{O}' \rightarrow \mathcal{O}$ making the following diagram commute:
\[
\begin{array}{ccc}
 & \mathcal{L} & \\
{\scriptstyle B'}\swarrow & & \searrow{\scriptstyle B} \\
\mathcal{O}' & \xrightarrow{\;F\;} & \mathcal{O}
\end{array}
\]
i.e. $B'; F = B$. The behaviours for $\mathcal{L}$ and the abstractions between them form its category of behaviours, denoted $\mathbf{B}(\mathcal{L})$.
It is also known as the comma category $\mathcal{L}/\mathbf{Ord}$.

### 3.4. Correctness and Completeness

Let $B, B': \mathcal{L} \rightarrow \mathcal{O}$ be two behaviours. As established above, we consider small morphisms in $\mathcal{O}$ to be more informative than large ones. Thus $B'$ is **correct** (or **safe**) for $B$ if
$$B \leq B'$$
Dually, it is **complete** for $B$ if $B' \leq B$. Correctness implies that $B'$ yields no more information than $B$, while completeness implies that it yields at least as much. Now fix a programming language $\mathcal{L}$, which we regard as a discrete ordered category, and consider the following diagram of lax functors:
\[
\begin{array}{ccc}
\mathcal{L} & \xrightarrow{\;D\;} & \mathcal{D} \\
 & {\scriptstyle B}\searrow & \downarrow{\scriptstyle B'} \\
 & & \mathcal{O}
\end{array}
\]
Then $D$ is **correct and complete** for $B$ iff there is a lax functor $B'$ such that $D; B'$ is both correct and complete for $B$, i.e. iff there exists a morphism $B' : D \rightarrow B$ in $\mathbf{B}(\mathcal{L})$. Of particular interest are decidable correct and complete abstract interpretations for $B$, because they specify complete data flow analysis algorithms for $B$. It does not make sense to define either correctness or completeness separately, without first specifying $B'$, e.g. strictness for domains, since almost every abstract interpretation $D$ would be correct (complete) for some behaviour on $\mathcal{D}$. This is true in all approaches, though often the behaviour is merely implicit. Logical relations improve on the general situation, but still account for $B'$ indirectly, at the technical level of domain equations [JN90, MJ86]. Here $B'$ is accounted for directly, which yields greater clarity and flexibility: $D$ is correct (complete) for $B$ if $D; B'$ is correct (complete) for $B$. This definition of correctness (completeness) is transitive and non-symmetric, as can be illustrated by the following example, involving higher-order strictness analysis.
The formalism used here is new; the proofs are in the original paper [BHA86]. Let $\mathcal{L}$ be a programming language generated from a single type $A$ and equipped with a denotational semantics $D : \mathcal{L} \to \mathcal{D}$, where $\mathcal{D}$ is the full subcategory of $\mathbf{Dom}$ generated by the image of $A$ in $\mathbf{Dom}$. The standard strictness behaviour $B'$ for $\mathcal{D}$ inherited from $\mathbf{Dom}$ (Example 3.7(2)) yields strictness for $\mathcal{L}$ via
$$B =_{df} D; B'$$
Thus, $D$ is correct and complete for $B$ by definition. Let $\mathcal{B}$ be the full subcategory of domains generated by $2 =_{df} \{ \bot \leq \top \}$. There is an abstraction $abs : \mathcal{D} \to \mathcal{B}$ which is correct for the strictness behaviour $B'$. From this (or directly) can be constructed a (smallest) rigid functor (an abstract interpretation) $D' : \mathcal{L} \to \mathcal{B}$ which is correct for $D; abs$. A short diagram chase now shows that $D'$ is also correct for $B$, since (writing $B'$ also for the strictness behaviour that $\mathcal{B}$ inherits from $\mathbf{Dom}$)
$$D'; B' \geq D; abs; B' \geq D; B' = B.$$
Correctness is the critical notion for abstract interpretation, because the safety of a program transformation depends on the correctness of the properties it is based on. Completeness naturally arises as the exact dual of correctness in our framework. Of course, for "standard behaviours", complete abstract interpretations are usually undecidable, and so completeness was neglected. However, there may well be decidable abstract interpretations for "nonstandard behaviours". Thus, completeness can express useful minimal requirements for data flow analysis algorithms. Further, there are situations where completeness is critical. For example, in data refinement (e.g. [HJ88]) an implementation must have at least the properties of the specifying abstract data type. We conjecture that these properties define a behaviour in our framework for which successful data refinement is simply completeness.

4.
CHARACTERIZATION OF BEHAVIOUR

We wish to construct an abstract interpretation from a behaviour. Each behaviour yields an equivalence relation on the programs, obtained by relating those programs which behave identically. Abstract interpretations are behaviours that are characterized by yielding a congruence relation. The point of the characterization functor is to associate to each behaviour an abstract interpretation that corresponds to the largest congruence which refines the equivalence relation of the behaviour, i.e. which relates programs that have the same behaviour in any context. This yields a categorical congruence (see below) on the category of programs, whose quotient will be the desired characterization of the original behaviour.

### 4.1. The Characterization Functor

**Definition 4.1:** Let $\mathcal{C}$ be a category. A congruence on $\mathcal{C}$ [Mac71, BW85] is a family $E_{A,B}$ of equivalence relations on the homsets $\mathcal{C}(A,B)$ (where $E_{A,B}(f,f')$ is written $f \equiv f'$ when the congruence $E$ is understood) satisfying, for $f,f' : A \rightarrow B$ and $g,g' : B \rightarrow C$,
$$f \equiv f' \text{ and } g \equiv g' \;\Rightarrow\; f; g \equiv f'; g'.$$
Given a congruence $E$ on a category $\mathcal{C}$ there is a quotient category $\mathcal{C}(E)$ having the same objects as $\mathcal{C}$, whose morphisms are the equivalence classes of morphisms in $\mathcal{C}$. Of course, there is also a quotient functor $Q : \mathcal{C} \to \mathcal{C}(E)$, which maps each morphism to its congruence class. It is injective on objects (preserves data-types) and is also surjective on objects and morphisms (is computationally relevant).
The category of quotients $\mathcal{Q}(\mathcal{L})$ is the full subcategory of $\mathcal{B}(\mathcal{L})$ of quotient functors, where we consider quotient functors as lax functors between discrete ordered categories (see Example 3.7-3). The universal property of quotient functors is given by

**Proposition 4.2:** Let $E$ be a congruence on $\mathcal{C}$ with quotient $Q$. If $H : \mathcal{C} \to \mathcal{D}$ is a lax functor such that for all morphisms $f, f' : A \to B$ in $\mathcal{C}$
$$f \equiv f' \implies Hf = Hf'$$
then there is a unique lax functor $H' : \mathcal{C}(E) \to \mathcal{D}$ satisfying $Q; H' = H$. Moreover, if $H$ is a rigid functor then $H'$ is a rigid functor too.

**Proof:** (Sketch) $H'$ agrees with $H$ on objects, and maps a congruence class $[f]$ of morphisms to $Hf$. $\Box$

**Example 4.3:** Let $D : \mathcal{L} \to \mathcal{D}$ be a denotational semantics and define two parallel morphisms $f$ and $f'$ to be denotationally equivalent, written $E_D(f, f')$, if $Df = Df'$. Then $D$ factorizes through the corresponding quotient functor in a unique way:
\[
\begin{array}{ccc}
\mathcal{L} & \xrightarrow{\;Q\;} & \mathcal{L}(E_D) \\
 & {\scriptstyle D}\searrow & \downarrow \\
 & & \mathcal{D}
\end{array}
\]
We have:

**Proposition 4.4:** $\mathcal{Q}(\mathcal{L})$ is a meet semi-lattice.

**Proof:** Let $\mathcal{Q} : \mathcal{L} \to \mathcal{U}$ and $\mathcal{Q}' : \mathcal{L} \to \mathcal{U}'$ be quotient functors arising from congruences $E$ and $E'$ respectively. It follows from Proposition 4.2 that there is at most one lax functor $F : \mathcal{U} \to \mathcal{U}'$ satisfying $\mathcal{Q}; F = \mathcal{Q}'$, which must then be a quotient. We then say $\mathcal{Q} \leq \mathcal{Q}'$.
Such an $F$ exists iff $E \subseteq E'$, that is, $E(f, f')$ implies $E'(f, f')$. The meet of $\mathcal{Q}$ and $\mathcal{Q}'$ (their categorical cartesian product) is the quotient corresponding to $E \cap E'$. □

**Definition 4.5:** Let $B : \mathcal{L} \to \mathcal{O}$ be a behaviour. Morphisms $f, f' : A \to B$ in $\mathcal{L}$ are behaviourally congruent if for all morphisms $g : A' \to A$ and $h : B \to B'$ we have
$$B(g; f; h) = B(g; f'; h),$$
that is, $f$ and $f'$ have the same behaviour in every (input-output) context. Then the quotient functor $\mathcal{Q}B : \mathcal{L} \to \mathcal{U}$ corresponding to this congruence is the characterization of the behaviour $B$.

Applying Proposition 4.2 to the behavioural congruence on $\mathcal{L}$ generated by $B$, with $H = B$, shows that $B = \mathcal{Q}B; \varepsilon_B$ for some behaviour $\varepsilon_B$, i.e. $\mathcal{Q}B$ is correct and complete for $B$. This characterization of behaviours is the object part of the functor $\mathcal{Q}$ specified in the following theorem:

**Theorem 4.6 (Characterization Theorem):** $\mathcal{Q}(\mathcal{L})$ is a coreflective subcategory of $\mathcal{B}(\mathcal{L})$, i.e. the inclusion of $\mathcal{Q}(\mathcal{L})$ in $\mathcal{B}(\mathcal{L})$ has a right adjoint $\mathcal{Q}$, called the characterization functor.

**Proof:** Let $B : \mathcal{L} \to \mathcal{O}$ be a behaviour. Then its image under $\mathcal{Q}$ is defined to be the quotient functor $\mathcal{Q}B : \mathcal{L} \to \mathcal{U}$ as described in Definition 4.5. The counit of the coreflection is $\varepsilon_B : \mathcal{Q}B \to B$. To see its universal property, let $\mathcal{Q}' : \mathcal{L} \to \mathcal{U}'$ be another quotient functor which is correct and complete for $B$, i.e. $\mathcal{Q}'; B' = B$ for some behaviour $B'$.
Then \( \mathcal{Q}' f = \mathcal{Q}' g \) implies that \( f \) and \( g \) are behaviourally congruent, since \( \mathcal{Q}' \) is a functor and \( B = \mathcal{Q}' ; B' \). Thus \( \mathcal{Q}B(f) = \mathcal{Q}B(g) \), and so applying Proposition 4.2 with \( \mathcal{Q}' \) as quotient shows there is a unique functor \( F : \mathcal{U}' \to \mathcal{U} \) making all triangles in the diagram above commute. □

Note that the universal property is more general than it may at first appear, since Example 4.3 shows that every abstract interpretation factorizes through some quotient functor. The Characterization Theorem 4.6 generalizes the well-known result that there exists a unique largest congruence relation contained in any given equivalence relation. Let us now illustrate the situation obtained so far by means of strictness analysis.

4.2. Strictness Analysis

The behaviour of a program is usually given by the behaviour of its denotation, but may also be determined in other ways, e.g. by first manipulating the syntax. Here both methods are used to obtain strictness analyses [AH87] for some simple languages which illustrate the main features of this framework.

First, we consider the behaviour of the denotations. Let \( \mathcal{D} \) be the full subcategory of domains generated by \( \mathbb{N}_\bot \), the flat natural numbers. Its behaviour \( B' : \mathcal{D} \to \Omega \) is induced by the strictness behaviour of \( \mathbf{Dom} \) (Example 3.7(2)). The structure of \( \mathbf{Dom} \) is so rich that it prevents identifications through behavioural congruence (unlike many languages):

**Lemma 4.7:** The characterization for the strictness behaviour \( B' : \mathbf{Dom} \to \Omega \) on domains is the identity.

**Proof:** Let \( f, g : D \to D' \) be continuous functions which are behaviourally congruent. Given \( x \in D \), let \( h : D' \to 2 \) be the unique continuous function such that \( h^{-1}(\bot) \) is the down-closure of \( f(x) \) in \( D' \). Then \( f \equiv g \) implies \( B'(\hat{x}; f; h) = B'(\hat{x}; g; h) \), where \( \hat{x} : 1 \to D \) picks out \( x \); since \( h(f(x)) = \bot \), also \( h(g(x)) = \bot \), whence \( g(x) \leq f(x) \).
By symmetry, \( f(x) \leq g(x) \) and so \( f(x) = g(x) \). □

Consider a simply typed \( \lambda \)-calculus which is freely generated by a type \( N \) (of natural numbers) equipped with zero \( 0 : N \) and successor \( s : N \to N \), and perhaps some other constants. Let \( \mathcal{L} \) be the corresponding category whose objects are the types, and whose morphisms \( X \to Y \) are equivalence classes under \( \alpha \)-conversion of terms \( t : Y \) equipped with a context \( \Gamma \) of type \( X \). Additional conversions (e.g. the \( \beta \)- and \( \eta \)-conversions which would make the category cartesian closed [LS86]) are not imposed, since they are not syntactic but arise from the behaviour. The standard denotational semantics for \( \mathcal{L} \) is given by \( D : \mathcal{L} \to \mathcal{D} \), where \( N \) is mapped to \( \mathbb{N}_\bot \) and constants, including zero and successor, receive their standard interpretation as lifted functions (though non-deterministic choice requires powerdomains, see below). The behaviour for \( \mathcal{L} \) is then given by \( B =_{df} D ; B' : \mathcal{L} \to \Omega \).

The constant numerals of \( \mathcal{L} \), e.g. \( 0, s0, \ldots \), when regarded as morphisms \( N \to N \) with free variable \( x : N \), i.e. \( \lambda x.0, \lambda x.s0, \ldots \), are all non-strict, while the denotation of a variable \( x \) is the identity, which is strict. Thus, numerals and variables are not behaviourally congruent. If the language is pure, i.e. there are no other constant symbols, then an inductive argument shows that the constant numerals are all behaviourally congruent. However, in the presence of additional constants, more distinctions can be made. Consider, for example,

(i) addition, \( + : N \times N \to N \)
(ii) bottom, \( \bot : N \)
(iii) non-deterministic choice, \( \mid : N \times N \to N \)

(The denotation of non-deterministic choice requires powerdomains, though its strictness behaviour is clear: it is strict iff both of its arguments are.)
There are now six separate congruence classes of morphisms \( N \times N \to N \) (equivalently, \( N \to N \to N \)), represented by the following \( \lambda \)-terms:
\[
\lambda xy.0, \qquad \lambda xy.x, \qquad \lambda xy.y, \qquad \lambda xy.\,x \mid y, \qquad \lambda xy.\,x + y, \qquad \lambda xy.\bot
\]
They correspond to the strictness values of Burn, Hankin and Abramsky [BHA86] for this type, which form the domain \( 2 \to 2 \to 2 \). However, if the language has fewer constants then there are fewer congruence classes, which is not reflected in their model since it is independent of the language. Conversely, more constants may yield more distinctions. For example, let \( \mathcal{L} \) also have a conditional, \( c : N \to N \to N \to N \), whose denotation is given by
$$D(c)\, b\, m\, n = \begin{cases} m & \text{if } b = 1 \\ n & \text{if } b = 0 \\ \bot & \text{otherwise} \end{cases}$$
Then truth values (where true and false are represented by 1 and 0 respectively) are distinguished from each other and from the other constant numerals. By contrast, their abstraction \( \text{abs} \) (Section 3.4) identifies all the numerals. Thus \( \text{abs} \) is incomplete for \( B \); more precisely, \( D; \text{abs}; B' > B \).

Note that often the strictness of first-order functions is all that we are interested in. However, the characterization of the corresponding behaviour is the same as that of \( B \), since higher-order morphisms yield first-order morphisms in appropriate contexts. Thus, the behaviour of interest may be extremely simple, and yet specify complicated abstract interpretations.

4.3. Compositionality of the Characterization

Behaviours may be constructed by means of simultaneous observation \( \langle B, B' \rangle \) and step-wise abstraction \( B'; B'' \). In this section such structure is used to construct the characterization of behaviours hierarchically, by means of two corollaries to the Characterization Theorem 4.6.
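The correspondence between the six congruence classes of Section 4.2 and the domain \( 2 \to 2 \to 2 \) can be checked concretely. The following Python sketch (our own illustration, not part of the paper) models \( \mathbb{N}_\bot \) with `None` as \( \bot \), abstracts non-deterministic choice by its strictness alone, and verifies that the six representatives induce six pairwise distinct divergence patterns:

```python
# Illustrative check: tabulate, for each representative lambda-term, whether
# the result is bottom (None) on each of the four bottom/non-bottom argument
# combinations. Six distinct patterns = six behavioural congruence classes.

def pattern(f):
    # 7 stands for an arbitrary defined value, None for bottom
    return tuple(f(x, y) is None for x in (None, 7) for y in (None, 7))

reps = {
    "lam xy.0":   lambda x, y: 0,
    "lam xy.x":   lambda x, y: x,
    "lam xy.y":   lambda x, y: y,
    "lam xy.x|y": lambda x, y: x if x is not None else y,  # strict iff both bottom
    "lam xy.x+y": lambda x, y: None if x is None or y is None else x + y,
    "lam xy.bot": lambda x, y: None,
}

patterns = {name: pattern(f) for name, f in reps.items()}
assert len(set(patterns.values())) == 6  # pairwise behaviourally distinguished
```

The six patterns are exactly the six monotone functions \( 2 \to 2 \to 2 \) of the Burn-Hankin-Abramsky model.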
**Corollary 4.8 (Modularity):** \( \mathcal{Q} \) preserves all limits in \( \mathcal{B}(\mathcal{L}) \). In particular, given two behaviours \( B : \mathcal{L} \to \mathcal{O} \) and \( B' : \mathcal{L} \to \mathcal{O}' \), the characterization \( \mathcal{Q}\langle B, B' \rangle \) of their simultaneous observation is the meet of \( \mathcal{Q}B \) and \( \mathcal{Q}B' \).

**Proof:** Right adjoints preserve limits. \( \Box \)

This result generalizes the well-known fact that the intersection of two congruence relations is itself a congruence relation.

**Example 4.9.** Let \( \mathcal{L} \) be the richest language considered in Section 4.2. For \( m > 0 \) we define a non-standard denotational semantics \( D_m : \mathcal{L} \to \mathcal{D} \) which differs from \( D \) in that \( D_m(s) \) is the successor mod \( m \), i.e. the lifted function \( n \mapsto n + 1 \ (\text{mod} \ m) \). Let \( B_m =_{df} D_m ; B' \) be the corresponding behaviour of \( \mathcal{L} \). Then the congruence classes of numerals are those of mod-\( m \) arithmetic and \( \{ \bot \} \). These cannot be identified since every numeral can be mapped to the congruence class of 0 (= "false") by sufficiently many applications of \( s \). Simultaneous observation of \( B_m \) and \( B_n \) is characterized by \( \mathcal{Q}B_q \), where \( q \) is the least common multiple of \( m \) and \( n \). Note that \( \mathcal{Q}B_q \) distinguishes only those programs which need to be distinguished for realizing simultaneous mod-\( m \) and mod-\( n \) observations.

**Corollary 4.10 (Functoriality):** Let \( B = B'; B'' : \mathcal{L} \to \mathcal{C} \to \mathcal{C}' \) be a composite of lax functors. Then we have
$$\mathcal{Q}B = \mathcal{Q}B'; \mathcal{Q}(B'') = \mathcal{Q}B'; \mathcal{Q}(\varepsilon_{B'}; B'')$$
In particular, \( \mathcal{Q}B \) factors through \( \mathcal{Q}B' \).

**Proof:** The lax functor \( B'' : B' \to B \) is a morphism of \( \mathcal{B}(\mathcal{L}) \). Since functors preserve domain and codomain of morphisms, we have \( \mathcal{Q}(B'') : \mathcal{Q}B' \to \mathcal{Q}B \), which yields the result. □

Stepwise abstraction of behaviours arises naturally in the search for the right level of abstraction.
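The meet of modular observations in Example 4.9 can be mirrored numerically. In the sketch below (an illustration of ours, not the paper's construction), two numerals receive the same observation key under simultaneous mod-\( m \) and mod-\( n \) observation exactly when they are congruent mod \( \mathrm{lcm}(m, n) \):

```python
from math import gcd

def classes(numerals, mods):
    # observation key of each numeral under simultaneous mod-m observations
    return {n: tuple(n % m for m in mods) for n in numerals}

numerals = range(100)
obs = classes(numerals, (4, 6))
lcm = 4 * 6 // gcd(4, 6)  # least common multiple q = 12

# indistinguishable under both observations iff congruent mod lcm(4, 6)
for a in numerals:
    for b in numerals:
        assert (obs[a] == obs[b]) == (a % lcm == b % lcm)
```

This is the finite shadow of Corollary 4.8: the simultaneous observation identifies no more and no fewer numerals than the mod-\( q \) observation.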
Consider data flow analysis: decidable abstract interpretations directly specify data flow analysis algorithms. Usually, however, the abstract interpretation associated with a given data flow problem is not decidable, so further abstractions are necessary. A common such abstraction step is to interpret conditional branching by non-deterministic choice. It can be realized by a syntactic translation, as follows.

**Example 4.11:** The conditional \( cxyz \) can be translated into the non-deterministic choice \( y \mid z \). This syntactic translation determines a functor \( T : \mathcal{L} \to \mathcal{L} \) (which is not mirrored by any endo-functor on the category of denotations). It yields a new behaviour \( B_1 = T; B \) on \( \mathcal{L} \) which is correct for \( B \) without being complete for it. Thus \( \mathcal{Q}B_1 \) is correct for \( B \) and complete for \( B_1 \). Now functoriality shows that \( \mathcal{Q}B_1 \) decomposes as \( \mathcal{Q}T ; \mathcal{Q}(\varepsilon_T; B) \), which may simplify its calculation.

5. CONCLUSION

We have presented a language independent framework for abstract interpretation that explicitly deals with behaviours of programs, with the benefit that the notion of correctness is simplified and the notion of completeness naturally arises as its dual. These improvements do not require considering observations (properties) as morphisms of a category; the usual relational approach with sets of observations would do. However, our framework additionally supports the hierarchical development of abstract interpretations and data flow analysis algorithms along the structure of the specifying program behaviour, by means of stepwise and modular refinement in the categorical framework. All these features have been illustrated by means of some simple strictness analyses.

6. FUTURE WORK

In this paper the characterization of a behaviour is universal amongst quotient functors.
It therefore focuses on substitution as a language construct and on datatype preserving abstract interpretations. This suggests two directions for generalizations. First, other language constructs such as fixpoints, products, general limits, or \( \lambda \)-abstraction should be considered. We believe that the development in this paper can be reformulated for quotient functors that also preserve these language constructs, to achieve this generalization. Second, one could generalize to abstract interpretations that do not necessarily preserve data types. Here an approach using "coequalizers" rather than "quotient functors" seems appropriate.

ACKNOWLEDGEMENTS

The development of this paper has been strongly influenced by discussions with Eugenio Moggi. Furthermore, we would like to thank Yves Lafont, Don Sannella and Terry Stroup for helpful comments, and Norbert Götz for giving us a hand in typing the manuscript.

REFERENCES

[AH87] S. Abramsky and C. L. Hankin, eds., Abstract Interpretation of Declarative Languages, Ellis Horwood, 1987.
[BHA86] G. L. Burn, C. L. Hankin and S. Abramsky, Strictness Analysis for Higher-Order Functions, Science of Computer Programming, 1986, 7, pp. 249-278.
[BW85] M. Barr and C. Wells, Toposes, Triples and Theories, Springer Verlag, 1985.
[CC77a] P. Cousot and R. Cousot, Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints, Proc. 4th ACM Symposium on Principles of Programming Languages, 1977, pp. 238-252.
[CC77b] P. Cousot and R. Cousot, Automatic Synthesis of Optimal Invariant Assertions: Mathematical Foundations, A.C.M. Sigplan Notices, 1977, 12, pp. 1-12.
[CC79] P. Cousot and R. Cousot, Systematic Design of Program Analysis
Top-k Web Service Compositions using Fuzzy Dominance Relationship

HAL Id: hal-00670706, https://hal.archives-ouvertes.fr/hal-00670706, submitted on 15 Feb 2012.

Karim Benouaret 1, Djamal Benslimane 1, Allel Hadjali 2, Mahmoud Barhamgi 1
1 LIRIS, Claude Bernard Lyon 1 University, 69622 Villeurbanne, France, firstname.lastname@liris.cnrs.fr
2 Enssat, University of Rennes 1, Lannion, France, allel.hadjali@enssat.fr

Abstract

Data Web service composition is a powerful means to answer users' complex queries. User preferences are a key aspect that must be taken into account in the composition scheme. In this paper, we present an approach to automatically compose Data Web services while taking the user preferences into account. User preferences are modeled thanks to fuzzy sets. We use an RDF query rewriting algorithm to determine the relevant services. The fuzzy constraints of the relevant services are matched to those of the query using a set of matching methods. We rank-order services using a fuzzification of Pareto dominance, then compute the top-k service compositions. We also propose a method to improve the diversity of the returned compositions while retaining, as far as possible, the compositions with the highest scores. Finally, we present a thorough experimental study of our approach.

1.
Introduction

Recent years have witnessed a growing interest in using Web services as a reliable means for data publishing and sharing among enterprises [5]. This type of services is known as Data Web services, where services correspond to calls over the business objects (e.g., Customer) in the underlying data sources. In this context, Data Web service composition is a powerful solution to answer users' complex queries by combining primitive Data Web services to realize value-added services on top of existing ones. User preferences are another key aspect that must be considered in the composition process. In this respect, fuzzy set theory [4] has proved to be a viable solution for modeling preferences. Fuzzy sets are very well suited to the interpretation of linguistic terms, which constitute a convenient way for users to express their preferences. For example, when expressing preferences about the "price", users often employ fuzzy terms like "cheap", "affordable", etc. As services and providers proliferate, a large number of candidate compositions, using different services, may answer the same query. It is thus important to set up an effective composition framework that identifies and retrieves the most relevant services and returns the top-k compositions according to the user preferences. Example: Consider the services from the car e-commerce domain in Table 1. The symbols "$" and "?" denote inputs and outputs, respectively. Services providing the same functionality belong to the same service class. For instance, $S_{21}$, $S_{22}$, $S_{23}$ and $S_{24}$ belong to the same class $S_2$. Each service has its own constraints on the data it manipulates. For instance, the cars returned by $S_{21}$ are of cheap price and short warranty. <table> <thead> <tr> <th>Table 1.
Example of Data Services</th> </tr> </thead> <tbody> <tr> <td>Service</td> </tr> <tr> <td>$S_{11}(x, y)$</td> </tr> <tr> <td>$S_{21}(x, y, z, t)$</td> </tr> <tr> <td>$S_{22}(x, y, z, t)$</td> </tr> <tr> <td>$S_{23}(x, y, z, t)$</td> </tr> <tr> <td>$S_{24}(x, y, z, t)$</td> </tr> <tr> <td>$S_{31}(x, y, z)$</td> </tr> <tr> <td>$S_{34}(x, y, z)$</td> </tr> </tbody> </table> Let us now assume that the user Bob would like to submit the following query $Q_1$: "return the French cars, preferably at an affordable price, with a warranty around 18 months, and having a normal power with a medium consumption". Bob would have to invoke $S_{11}$ to retrieve the French automakers; he would then invoke one or more of the services $S_{21}, S_{22}, S_{23}, S_{24}$ to retrieve the French cars along with their prices and warranties; finally, he would invoke one or more of the services $S_{31}, \ldots, S_{34}$ to retrieve the power and the consumption of the retrieved cars. This manual process is painstaking. It raises the following challenges: (i) how to understand the semantics of the published services in order to select the relevant ones that can contribute to answering the query at hand; (ii) how to retain the most relevant services, i.e. those that best satisfy the user preferences; and (iii) how to generate the best compositions that satisfy the whole user query. **Contributions:** We already tackled the first challenge by proposing in [2] an RDF query rewriting approach that automatically generates the Data service compositions (without considering any preference constraints). In this paper, we focus on the second and third challenges. We select services that cover a part of the query even if their constraints match the user preference constraints only partially. Different methods are investigated to compute the matching degrees between the services' constraints and the preferences involved in the query. In order to select the most relevant services, a multicriteria fuzzy dominance relationship is proposed to rank-order services.
The selected services are then used to find the top-$k$ compositions that answer the user query. To avoid returning similar compositions, we also propose a diversified top-$k$ service composition method that aims both to improve the diversity of the top-$k$ selection and to retain, as far as possible, the compositions with the best scores. The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 formally defines the studied problem. Section 4 describes the proposed fuzzy dominance and ranking criteria. Section 5 is devoted to both the top-$k$ and diversified top-$k$ composition generation methods. Section 6 presents the global architecture of our implemented composition system and reports a thorough experimental evaluation. Finally, Section 7 concludes the paper.

## 2 Related work

Preferences in Web service selection/composition have received much attention from researchers. In [12], the authors use a qualitative graphical representation of preferences, CP-nets, to deal with service selection in terms of user preferences. This approach can reason about a user's incomplete and constrained preferences. In [7], a method to rank semantic Web services is proposed. It is based on computing the matching degree between a set of requested NFPs (Non-Functional Properties) and a set of NFPs offered by the discovered Web services. NFPs cover QoS aspects, but also other business-related properties such as pricing and insurance. Semantic annotations are used for describing NFPs, and the ranking process is achieved using automatic reasoning techniques that exploit the annotations. However, the problem of composition is not addressed in these works. Agarwal and Lamparter [1] proposed an approach for the automated selection of Web services for composition. Web service combinations can be compared with each other and ranked according to the user preferences. Preferences are modeled as fuzzy IF-THEN rules.
The IF part contains fuzzy descriptions of the various properties of a service, and the THEN part is one of the fuzzy characterizations of a special concept called **Rank**. A fuzzy rule describes which combination of attribute values a user is willing to accept to which degree, where attribute values and degrees of acceptance are fuzzy sets. ServiceRank [13] considers the QoS aspects as well as the social perspectives of services. Services that have good QoS and are frequently invoked by others are more trusted by the community and will be assigned high ranks. In [11], the authors propose a system for conducting qualitative Web service selection in the presence of incomplete or conflicting user preferences. The paradigm of CP-nets is used to model user preferences. The system utilizes the history of users to amend the preferences of active users, thus improving the results of service selection. The work most closely related to our proposal is [10], where the authors consider dominance relationships between Web services based on their degrees of match to a given request in order to rank available services. Distinct scores based on the notion of dominance are defined for assessing when a service is objectively interesting. However, that work considers only the selection of single services, dealing with neither the problem of composition nor user preferences. Finally, in [9], the authors propose a method to diversify Web service search results in order to deal with users that have different, but unknown, preferences. The proposed method focuses on QoS parameters with non-numeric values, for which no ordering can be defined. However, this method provides the same services to all users, and the problem of composition is not addressed. In our approach the diversified compositions vary according to the user.

## 3 Preference-based composition model

### 3.1 Preference Queries

Users express their preference queries over domain ontologies using a slight modification of SPARQL.
For instance, query $Q_1$ given in Section 1 is expressed as follows:

```
URL=http://vm.liris.cnrs.fr:36880/MembershipFunctions/
SELECT ?n ?pr ?w ?pw ?co
WHERE { ?Au rdf:type AutoMaker
        ?Au hasCountry 'France'
        ?Au makes ?C
        ?C rdf:type Car
        ?C hasName ?n
        ?C hasPrice ?pr
        ?C hasWarranty ?w
        ?C hasPower ?pw
        ?C hasConsumption ?co }
Preferring { ?pr is 'URL/AffordableService'
             ?w is 'URL/around(18)Service'
             ?pw is 'URL/NormalService'
             ?co is 'URL/MediumService' }
```

Preferences are modeled using fuzzy sets [4]. Formally, a fuzzy set $F$ on a referential $X$ is characterized by a membership function $\mu_F : X \rightarrow [0, 1]$, where $\mu_F(x)$ represents the degree of membership of $x$ in $F$. Namely, for $x, y \in F$, $x$ is more preferable than $y$ iff $\mu_F(x) > \mu_F(y)$. In practice, membership functions are often of trapezoidal form, represented by the quadruplet $(A, B, a, b)$ as shown in Figure 1. A regular interval $[A, B]$ can be seen as a fuzzy set represented by the quadruplet $(A, B, 0, 0)$. Membership functions are implemented as Web services and can be shared by users. They are used in the Preferring clause of the query by mentioning the URI of the implementing service. More details are provided in Section 6.

Figure 1. Fuzzy value representation

### 3.2 Data services

Data services are partitioned into different classes. A class $S_j$ comprises services providing the same functionality. A Data service $S_{ji}$ of class $S_j$ is described as a predicate $S_{ji}(\$X_j, ?Y_j) :- \langle G_j(X_j, Y_j, Z_j), C_{ji} \rangle$, where $X_j$ and $Y_j$ are the sets of input and output variables of $S_{ji}$, respectively. Input and output variables are also called distinguished variables. They are prefixed with the symbols "$\$$" and "?", respectively. $G_j(X_j, Y_j, Z_j)$ represents the functionality of the service. This functionality is described as a semantic relationship between input and output variables.
$Z_j$ is the set of existential variables relating $X_j$ and $Y_j$. $C_{ji} = \{C_{ji1}, ..., C_{jin}\}$ is a set of constraints expressed as intervals or fuzzy sets on $X_j$, $Y_j$ or $Z_j$ variables. $X_j$ and $Y_j$ variables are defined in the WSDL description of services. The functionality $G_j$ and the constraints $C_{ji}$ of a service $S_{ji}$ are added to the WSDL description in the form of annotations, represented as SPARQL queries. For instance, the following query illustrates the functionality and constraints of $S_{21}$:

```
URL=http://vm.liris.cnrs.fr:36880/MembershipFunctions/
RDFQuery { SELECT ?y ?z ?t
           WHERE { ?Au rdf:type AutoMaker
                   ?Au hasName $x
                   ?C hasPrice ?z
                   ?C hasWarranty ?t } }
CONSTRAINTS { ?z is 'URL/CheapService'
              ?t is 'URL/ShortService' }
```

The `SELECT` and `WHERE` clauses define the functionality of $S_{21}$; the `CONSTRAINTS` clause gives its constraints.

### 3.3 Discovering Relevant Services

Let $Q$ be a preference query. We use our RDF query rewriting algorithm [2] to discover the parts of $Q$ that are covered by each service (recall that in the general case services may match only parts, referred to as $q_j$, of $Q$). A part $q_j$ is covered by one or more services that constitute a class of relevant services $S_j$. A service $S_{ji} \in S_j$ is said to be relevant to $Q$ iff the functionality of $S_{ji}$ completely matches a part $q_j$ and its constraints match completely or partially the preference constraints of $q_j$. We use a set of methods $M = \{m_1, ..., m_n\}$ (e.g., constraint inclusion operators), resulting in different matching degrees for each service. Two classes of constraint inclusion operators are considered. Let $C \equiv x \in E$ and $C' \equiv x \in F$ be two constraints.

- **Quantitative method (QM).** $D_{m}(C \subseteq C') = \frac{|E \cap F|}{|E|} = \frac{\sum_{x \in X} T(\mu_{E}(x), \mu_{F}(x))}{\sum_{x \in X} \mu_{E}(x)}$, where the intersection is interpreted by a t-norm operator $T$ [4].
For instance, $T = \min$ (M-QM) and $T = \text{product}$ (P-QM).

- **Logic method (LM).** $D_{m}(C \subseteq C') = \min_{x \in X} (\mu_{E}(x) \rightarrow_{I} \mu_{F}(x))$, where $\rightarrow_{I}$ stands for a fuzzy implication [4]. For instance, Gödel (G-LM) and Łukasiewicz (L-LM).

### Table 2. Services and their matching degrees

<table>
<thead>
<tr> <th>$S_{ji}$</th> <th>$q_j$</th> <th>M-QM</th> <th>P-QM</th> <th>G-LM</th> <th>L-LM</th> </tr>
</thead>
<tbody>
<tr> <td>$S_{11}$</td> <td>$q_1$</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr>
<tr> <td>$S_{21}$</td> <td rowspan="4">$q_2$</td> <td>$(1, 0.57)$</td> <td>$(0.98, 0.057)$</td> <td>$(1, 0)$</td> <td>$(0.80, 0)$</td> </tr>
<tr> <td>$S_{22}$</td> <td>$(0.80, 1)$</td> <td>$(0.77, 1)$</td> <td>$(0, 1)$</td> <td>$(0.50, 1)$</td> </tr>
<tr> <td>$S_{23}$</td> <td>$(0.20, 0.16)$</td> <td>$(0.13, 0.13)$</td> <td>$(0, 0)$</td> <td>$(0, 0)$</td> </tr>
<tr> <td>$S_{24}$</td> <td>$(0.83, 0.88)$</td> <td>$(0.83, 0.88)$</td> <td>$(0.60, 0.50)$</td> <td>$(0.60, 0.50)$</td> </tr>
<tr> <td>$S_{31}$</td> <td rowspan="4">$q_3$</td> <td>$(0.50, 0.36)$</td> <td>$(0.46, 0.32)$</td> <td>$(0, 0)$</td> <td>$(0, 0)$</td> </tr>
<tr> <td>$S_{32}$</td> <td>$(0.79, 0.75)$</td> <td>$(0.69, 0.72)$</td> <td>$(0, 0.25)$</td> <td>$(0.40, 0.50)$</td> </tr>
<tr> <td>$S_{33}$</td> <td>$(0.21, 0.64)$</td> <td>$(0.17, 0.61)$</td> <td>$(0, 0)$</td> <td>$(0, 0)$</td> </tr>
<tr> <td>$S_{34}$</td> <td>$(0.83, 0.85)$</td> <td>$(0.83, 0.85)$</td> <td>$(0.50, 0.50)$</td> <td>$(0.50, 0.50)$</td> </tr>
</tbody>
</table>

Table 2 shows the matching degrees between each service $S_{ji}$ from Table 1 and its corresponding part $q_j$ of $Q_1$. The service $S_{11}$, covering the part $q_1$, has no matching degree since the user imposes no constraints on $q_1$. However, each service covering the part $q_2$ is associated with four degrees (one per method).
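The four matching methods can be sketched over a discretized numeric domain. The trapezoidal membership functions and grid below are illustrative choices of ours, not the paper's data, so the resulting degrees are not those of Table 2:

```python
# Matching degrees between a service constraint E and a query preference F,
# both fuzzy sets over a discretized domain (illustrative sketch).

def trapezoid(A, B, a, b):
    # membership function of the trapezoidal fuzzy set (A, B, a, b)
    def mu(x):
        if x < A - a or x > B + b:
            return 0.0
        if x < A:
            return (x - (A - a)) / a
        if x > B:
            return ((B + b) - x) / b
        return 1.0
    return mu

def qm(mu_E, mu_F, xs, tnorm):      # quantitative method
    return sum(tnorm(mu_E(x), mu_F(x)) for x in xs) / sum(mu_E(x) for x in xs)

def lm(mu_E, mu_F, xs, imp):        # logic method
    return min(imp(mu_E(x), mu_F(x)) for x in xs)

godel = lambda a, b: 1.0 if a <= b else b           # Goedel implication
luka = lambda a, b: min(1.0, 1.0 - a + b)           # Lukasiewicz implication

cheap = trapezoid(0, 10000, 1, 2000)                # hypothetical "cheap" prices
affordable = trapezoid(8000, 15000, 3000, 3000)     # hypothetical "affordable"
xs = range(0, 20001, 100)

m_qm = qm(cheap, affordable, xs, min)
p_qm = qm(cheap, affordable, xs, lambda a, b: a * b)
g_lm = lm(cheap, affordable, xs, godel)
l_lm = lm(cheap, affordable, xs, luka)
assert g_lm <= l_lm and 0.0 <= p_qm <= m_qm <= 1.0
```

Note the structural relations the assertion checks: the product t-norm never exceeds the minimum, and the Gödel implication never exceeds the Łukasiewicz one, so P-QM ≤ M-QM and G-LM ≤ L-LM hold for any pair of fuzzy sets.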
Each matching degree is formulated as a pair of real values within the range $[0, 1]$, where the first and second values are the matching degrees of the constraints $price$ and $warranty$, respectively. Similarly, for the matching degrees of the services covering $q_3$, the first and second values represent respectively the inclusion degrees of the constraints $power$ and $consumption$.

### 3.4 Problem statement

Given a preference query $Q :- \langle q_1, ..., q_n \rangle$, each part $q_j$ is a tuple $(\pi_j, P_{q_j})$, where $\pi_j$ represents $q_j$ without its preferences $P_{q_j}$. Given a set of service classes $S = \{S_1, ..., S_n\}$, where a class $S_j$ groups the services that are relevant to the part $q_j$, and given a set $M = \{m_1, ..., m_n\}$ of matching methods, the problem is how to rank services in each class $S_j$ to select the most relevant ones, and how to rank the generated compositions to select the top-$k$ ones.

## 4 Fuzzy dominance and fuzzy scores

### 4.1 Dominances: Pareto vs Fuzzy

Services of the same class $S_j$ have the same functionality; they only differ in terms of constraints, providing thus different matching degrees. Individual matching degrees of services could be aggregated. One method is to assign weights to individual degrees and, for instance, compute a weighted average of degrees. In doing so, users may not know enough to make trade-offs between the different relevancies using numbers (average degrees). Users thus lose the flexibility to select their desired answers by themselves. Computing the skyline [3] is a natural solution to overcome this limitation. The skyline consists of the set of points which are not dominated by any other point.

**Definition 1** (Pareto dominance) Let $u$ and $v$ be two $d$-dimensional points. We say that $u$ dominates $v$, denoted by $u \succ v$, iff $\forall i \in [1, d], u_i \geq v_i \land \exists k \in [1, d], u_k > v_k$. Pareto dominance is not always significant enough to rank-order points.
To illustrate this, let $u = (u_1, u_2) = (1, 0)$ and $v = (v_1, v_2) = (0.9, 1)$ be two matching degrees. Under the Pareto order, $u$ and $v$ are incomparable. However, one can consider that $v$ is better than $u$, since $v_2 = 1$ is much higher than $u_2 = 0$ whereas $v_1 = 0.9$ is very close to $u_1 = 1$. It is thus interesting to fuzzify the dominance relationship to express the extent to which a matching degree (more or less) dominates another one. We define below a fuzzy dominance that relies on a particular membership function modeling the graded inequality "strongly larger than".

**Definition 2** (fuzzy dominance) Given two $d$-dimensional points $u$ and $v$, we define the fuzzy dominance expressing the extent to which $u$ dominates $v$ as:
$$\text{deg}(u \succ v) = \frac{\sum_{i=1}^{d} \mu_{\succ}(u_i, v_i)}{d}$$
where $\mu_{\succ}(u_i, v_i)$ expresses the extent to which $u_i$ is more or less (strongly) greater than $v_i$. $\mu_{\succ}$ is defined as:
$$\mu_{\succ}(x, y) = \begin{cases} 0 & \text{if } x - y \leq \varepsilon \\ 1 & \text{if } x - y \geq \lambda + \varepsilon \\ \frac{x - y - \varepsilon}{\lambda} & \text{otherwise} \end{cases}$$
where $\lambda > 0$, i.e., $\mu_{\succ}$ is more demanding than the idea of "strictly greater", and $\varepsilon \geq 0$, in order to ensure that $\mu_{\succ}$ agrees with the idea of "greater" in the usual sense. The semantics of $\mu_{\succ}$ is as follows: if $x - y$ is less than $\varepsilon$, then $x$ is not at all strongly greater than $y$; if $x - y$ is larger than $\lambda + \varepsilon$, then $x$ is definitely much greater than $y$; if $x - y$ lies between $\varepsilon$ and $\lambda + \varepsilon$, then the extent to which $x$ is much greater than $y$ is a matter of degree.

Let us reconsider the previous example, $u = (1, 0)$ and $v = (0.9, 1)$. With $\varepsilon = 0$ and $\lambda = 0.2$, we have $\text{deg}(u \succ v) = 0.25$ and $\text{deg}(v \succ u) = 0.5$.
This is more informative than the verdict of Pareto dominance, under which $u$ and $v$ are simply incomparable. In the following sections, we use the defined fuzzy dominance to compute the scores of services and compositions.

4.2 Associating a score with a service

Under a single matching degree, the dominance relationship is unambiguous. When multiple methods are applied, resulting in different matching degrees for the same constraints, the dominance relationship becomes uncertain. The model proposed in [8], namely the probabilistic skyline, overcomes this problem. However, Skoutas et al. show in [10] the limitations of the probabilistic skyline for ranking services and introduce the Pareto dominating score of individual services. We generalize this score to fuzzy dominance and propose a fuzzy dominating score ($FDS$). The $FDS$ of a service $S_{ji}$ indicates the average extent to which $S_{ji}$ dominates the other services of its class $S_j$.

**Definition 3** (Fuzzy dominating score) The fuzzy dominating score of a service $S_{ji}$ in its class $S_j$ is defined as:
$$FDS(S_{ji}) = \frac{1}{|S_j| - 1} \sum_{k \neq i} \frac{1}{|M|^2} \sum_{h=1}^{|M|} \sum_{l=1}^{|M|} \text{deg}(S_{ji}^h \succ S_{jk}^l)$$
where $S_{ji}^h$ is the matching degree of $S_{ji}$ obtained by applying the $h^{th}$ method. The term $(|S_j| - 1)$ is used to normalize the $FDS$ into the range $[0, 1]$. Table 3 shows the fuzzy dominating scores of the services of our example.

4.3 Associating a score with a composition

Different compositions can be generated from the different classes. To rank such compositions, we extend the previous $FDS$ definition to compositions and associate each composition with an $FDS$ obtained by aggregating the $FDS$s of its component services. Let $C = \{S_{1i}, ..., S_{ni}\}$ be a composition of $n$ services (one per class) and $d = d_1 + ... + d_n$ be the number of preferences, where $d_j$ is the number of constraints involved in the service $S_{ji}$.
The $FDS$ of $C$ is then computed as follows:
$$FDS(C) = \frac{1}{d} \sum_{j=1}^{n} d_j \cdot FDS(S_{ji})$$

5 Top-k service composition

5.1 Efficient generation

A straightforward method to find the top-$k$ compositions that answer a query is to generate all possible compositions, compute their scores, and return the top-$k$ ones. However, this approach incurs a high computational cost, as it generates all possible compositions even though most of them are not in the top-$k$. The following theorem provides an optimization technique to find the top-$k$ compositions quickly: the top-$k$ services of the different service classes are sufficient to compute the top-$k$ compositions.

**Theorem 1** Let $C = \{S_{1i}, ..., S_{ni}\}$ be a composition, and let top-$k$-$S_j$ (resp. top-$k$-$C$) be the top-$k$ services of the class $S_j$ (resp. the top-$k$ compositions). Then, $\exists S_{ji} \in C : S_{ji} \notin$ top-$k$-$S_j$ $\implies C \notin$ top-$k$-$C$.

Table 3 shows the top-$k$ ($k=2$) services in each service class using the $FDS$. Relevant services that are not in the top-$k$ of their classes are eliminated (they are crossed out in Table 3); the other services are retained. The top-$k$ compositions are generated from the different top-$k$-$S_j$ sets. Table 4 shows the possible compositions along with their scores and the top-$k$ compositions of our example.

**Table 3.
Top-k services**

<table>
<thead>
<tr> <th>Service</th> <th>Class</th> <th>Score</th> <th>Top-k</th> </tr>
</thead>
<tbody>
<tr> <td>$S_{11}$</td> <td>$S_1$</td> <td>-</td> <td>✓</td> </tr>
<tr> <td>$S_{21}$</td> <td>$S_2$</td> <td>0.527</td> <td>✓</td> </tr>
<tr> <td>$S_{22}$</td> <td>$S_2$</td> <td>0.657</td> <td>✓</td> </tr>
<tr> <td>$S_{23}$</td> <td>$S_2$</td> <td>0.027</td> <td>✗</td> </tr>
<tr> <td>$S_{24}$</td> <td>$S_2$</td> <td>0.345</td> <td>✗</td> </tr>
<tr> <td>$S_{31}$</td> <td>$S_3$</td> <td>0.083</td> <td>✗</td> </tr>
<tr> <td>$S_{32}$</td> <td>$S_3$</td> <td>0.573</td> <td>✓</td> </tr>
<tr> <td>$S_{33}$</td> <td>$S_3$</td> <td>0.187</td> <td>✗</td> </tr>
<tr> <td>$S_{34}$</td> <td>$S_3$</td> <td>0.717</td> <td>✓</td> </tr>
</tbody>
</table>

**Algorithm 1: TKSC**

Input: $Q$ preference query; $S$ set of service classes; $k \in \mathbb{N}$; $M = \{m_1, ..., m_n\}$ set of matching methods; $\varepsilon \geq 0$; $\lambda > 0$

1. $R \leftarrow \emptyset$; $C \leftarrow \emptyset$
2. foreach $S_j$ in $S$ do
3. &nbsp;&nbsp;if $\exists q_j \in Q : cover(S_j, q_j)$ then
4. &nbsp;&nbsp;&nbsp;&nbsp;$R \leftarrow R \cup \{S_j\}$
5. &nbsp;&nbsp;&nbsp;&nbsp;if $P_{q_j} \neq \emptyset$ then
6. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;foreach $S_{ji} \in S_j$ do
7. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;foreach $m$ in $M$ do
8. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;computeMatchingDegree($S_{ji}, P_{q_j}, m$)
9. foreach $S_j$ in $R$ do
10. &nbsp;&nbsp;if $P_{q_j} = \emptyset$ then
11. &nbsp;&nbsp;&nbsp;&nbsp;top-k-$S_j$ $\leftarrow$ random($S_j$, $k$)
12. &nbsp;&nbsp;else
13. &nbsp;&nbsp;&nbsp;&nbsp;foreach $S_{ji} \in S_j$ do
14. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;computeScore($S_{ji}$)
15. &nbsp;&nbsp;&nbsp;&nbsp;top-k-$S_j$ $\leftarrow$ top($k$, $S_j$)
16. $C \leftarrow$ compose(top-k-$S_1$, ..., top-k-$S_n$)
17. foreach $C_i \in C$ do
18. &nbsp;&nbsp;computeScore($C_i$)
19. return top($k$, $C$)

**Table 4. Top-k compositions**

<table>
<thead>
<tr> <th>Composition</th> <th>Score</th> <th>Top-k</th> </tr>
</thead>
<tbody>
<tr> <td>$C_1 = \{S_{11}, S_{22}, S_{32}\}$</td> <td>0.615</td> <td>✗</td> </tr>
<tr> <td>$C_2 = \{S_{11}, S_{22}, S_{34}\}$</td> <td>0.687</td> <td>✓</td> </tr>
<tr> <td>$C_3 = \{S_{11}, S_{21}, S_{32}\}$</td> <td>0.353</td> <td>✗</td> </tr>
<tr> <td>$C_4 = \{S_{11}, S_{21}, S_{34}\}$</td> <td>0.625</td> <td>✓</td> </tr>
</tbody>
</table>

5.2 Top-k compositions algorithm

The algorithm, hereafter referred to as TKSC, computes the top-$k$ compositions according to the fuzzy scores. It proceeds as follows.
**Step 1: computing the matching degrees (lines 1–8).** Each class whose services cover a query part is added to the list of relevant classes. If its services are concerned by user preferences, we compute their matching degrees under each of the available methods.

**Step 2: eliminating less relevant services (lines 9–15).** For each class whose services are not concerned by user preferences, we select $k$ services at random, since they are all equal with respect to the user preferences. Otherwise, i.e., when its services are concerned by user preferences, we first compute the scores of its services and then retain only the top-$k$ ones.

**Step 3: returning the top-k compositions (lines 16–19).** We first compose the retained services, i.e., the top-$k$ of each class, then compute the scores of the generated compositions. Finally, we provide the user with the top-$k$ ones.

5.3 Diversity-aware top-k compositions

Similar services may exist in each class $S_j$, leading to similar top-$k$ compositions. Little variety in the top-$k$ compositions list will likely lead to user frustration. Diversification is then needed to improve the quality of the top-$k$ compositions. We tackle this issue by proposing a method that maximizes the diversity of compositions while maintaining an acceptable accuracy (expressed in terms of $FDS$). We diversify the top-$k$ compositions by first diversifying the top-$k$ services of each class \(S_j\), and then by diversifying the compositions themselves. Diversity of the top-\(k\) of a class \(S_j\) means that the services it includes should be dissimilar to each other. A principled way to improve diversity while maintaining accuracy is to explicitly use both diversity and accuracy during the top-\(k\) services selection.
We use the following quality metric that combines diversity and accuracy:
\[ \text{Quality}(S_{ji}) = FDS(S_{ji}) \times \text{RelDiv}(S_{ji}, dtopkS_j) \]
The quality of a service \(S_{ji}\) in its class \(S_j\) is proportional to its accuracy w.r.t. \(FDS\) and to its relative diversity w.r.t. the diversified top-\(k\) services selected so far, \(dtopkS_j\). Initially, \(dtopkS_j\) is an empty set, and its first element will necessarily be one of the services with the highest \(FDS\). The relative diversity of a service \(S_{ji}\) to the current set \(dtopkS_j\) is defined as the average dissimilarity between \(S_{ji}\) and the services selected so far [6], as described in the following equation:
\[ \text{RelDiv}(S_{ji}, dtopkS_j) = \frac{\sum_{S_{jr} \in dtopkS_j} \text{Dist}(S_{ji}, S_{jr})}{|dtopkS_j|} \]
The relative diversity of a service \(S_{ji}\) to the initial empty set, i.e., \(dtopkS_j = \emptyset\), is set to 1. The quantity \(\text{Dist}(S_{ji}, S_{jr})\) represents the distance (i.e., dissimilarity) between the services \(S_{ji}\) and \(S_{jr}\). Recall that Data services of the same class have the same functionality and only differ in their constraints; the distance can therefore be reduced to the distance between their constraints. Consider two services \(S_{ji}\) and \(S_{jr}\) in \(S_j\) with the constraints \(C_{ji} \equiv x_1 \in E_1, ..., x_d \in E_d\) and \(C_{jr} \equiv x_1 \in F_1, ..., x_d \in F_d\), respectively, where \(d\) is the number of constraints involved in \(S_{ji}\) and \(S_{jr}\). The distance between \(S_{ji}\) and \(S_{jr}\) can then be measured by \(\text{Dist}(S_{ji}, S_{jr}) = \max_{h \in \{1, \ldots, d\}} \text{Dist}(E_h, F_h)\), where \(\text{Dist}(E_h, F_h) = \max_{x \in X} |\mu_{E_h}(x) - \mu_{F_h}(x)|\) is the distance between the fuzzy sets \(E_h\) and \(F_h\).
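The distance, relative diversity, and quality metric above can be sketched compactly. We assume each constraint's fuzzy set is discretized over a finite domain; since the service distance is a max over constraints of a max over domain points, the constraint vectors can be flattened into one vector per service. Names are ours:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;  // a service's constraint fuzzy sets,
                                  // discretized and flattened into one vector

// Dist(S, S'): max over constraints of sup_x |muE(x) - muF(x)|; with the
// flattened representation this is simply the Chebyshev distance.
double dist(const Vec& a, const Vec& b) {
    double m = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        m = std::max(m, std::fabs(a[i] - b[i]));
    return m;
}

// RelDiv: average distance to the already-selected set (1 when empty).
double relDiv(const Vec& cand, const std::vector<Vec>& selected) {
    if (selected.empty()) return 1.0;
    double s = 0.0;
    for (const Vec& sel : selected) s += dist(cand, sel);
    return s / selected.size();
}

// Quality = accuracy (FDS) x relative diversity.
double quality(double fdsScore, const Vec& cand, const std::vector<Vec>& selected) {
    return fdsScore * relDiv(cand, selected);
}
```

Greedy diversified selection then repeatedly picks the candidate maximizing `quality` and appends it to `selected`, which is how we read the construction of \(dtopkS_j\).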
**Diversified top-\(k\) service compositions computing.** The top-\(k\) compositions set is diversified by diversifying its component compositions while maintaining acceptable composition scores. The \(\text{Quality}\) of a composition \(C\) is an aggregation of the \(\text{Quality}\) values of its component services. Let \(C = \{S_{1i}, \ldots, S_{ni}\}\) be a composition of \(n\) services and \(d = d_1 + \ldots + d_n\) be the number of preferences, where \(d_j\) is the number of constraints involved in the service \(S_{ji}\). The \(\text{Quality}\) of \(C\) is then computed as follows:
\[ \text{Quality}(C) = \frac{1}{d} \sum_{j=1}^{n} d_j \cdot \text{Quality}(S_{ji}) \]
The diversified top-\(k\) compositions algorithm (\(DTKSC\)) is obtained from \(TKSC\) by applying the following modifications:

**line 15**: instead of taking the top-\(k\) services in each class based on their scores, we take them based on their qualities, i.e., we take the diversified top-\(k\) ones by applying Algorithm 2. Line 15 becomes: \(\text{top-k-}S_j \leftarrow DTKS(k, s, S_j);\)

**line 18**: we compute the quality of compositions instead of their scores. This line becomes: \(\text{computeQuality}(C);\)

**line 19**: instead of returning the top-\(k\) compositions, i.e., those with the highest scores, we return the diversified top-\(k\) ones, i.e., those with the best qualities. Line 19 becomes: \(\text{return } Dtop(k, C);\)

### 6 Architecture and experimental evaluation

#### 6.1 System architecture

In this section we outline the basic components of our system, their roles and how they interact with each other. A high-level architecture of our system is illustrated in Figure 2. The system consists of the following components:

- **The Fuzzy Membership Functions Manager** manages fuzzy linguistic terms. It enables users and service providers to define their desired fuzzy terms along with their membership functions.
The defined terms are stored in a local fuzzy terms knowledge base, which can be shared by users, and are linked to their implementing Web services. Users and providers can directly test the proposed membership functions and use the associated fuzzy terms. For each fuzzy term we provide a shape that gives a graphical representation of the associated membership function, a form that helps users compute the degree to which a given value belongs to the fuzzy set of the considered term, and a WSDL description of the Web service that implements the membership function (the tool is available at http://vm.liris.cnrs.fr:36880/FuzzyTerms/).

- **The Service Annotator** allows providers to annotate the WSDL description files of services with fuzzy terms representing the services' constraints, and with SPARQL queries expressed over a domain ontology representing the semantic definition of the service functionality in the form of an RDF graph. This annotation is implemented by adding a new element called "rdfQuery" to the XML Schema of WSDL, as in the WSDL-S approach. The WSDL files are then published in a service registry.

- **The Ontology Manager** uses the Jena API to manage the domain ontology (i.e., to add/delete concepts).

- **The Preference Query Formulator** provides users with a GUI, implemented with Java Swing, to interactively formulate their queries over a domain ontology. Users are not required to know any specific ontology query language to express their queries.

- **The Top-k Service Compositions** module consists of five components. The *RDF Query Rewriter* implements an RDF query rewriting algorithm [2] to identify the relevant services that match (some parts of) a user query; for that purpose, it exploits the service annotations. The *Service Locator* feeds the Query Rewriter with the services that most likely match a given query.
The *Top-k Compositions* component computes (i) the matching degrees of the relevant services, (ii) the fuzzy dominating scores of the relevant services, (iii) the top-k services of each relevant service class and (iv) the fuzzy composition scores, to return the top-k compositions. The *Diversification-aware Top-k Compositions* component implements the proposed quality metric to compute diversified top-k service compositions. The (diversified) top-k service compositions are then translated by the *Composition Plan Generator* into execution plans expressed in the XPDL language. They are executed by a workflow execution engine; we use the Sarasvati execution engine from Google Code.

#### 6.2 Experimental evaluation

This section presents an extensive experimental study of our approach. Our objective is to assess the efficiency and the scalability of the proposed top-k service composition algorithms as the number of considered services increases. For this purpose, we implemented a Web service generator. The generator takes as input a set of (real-life) model services (each representing a class of services) and their associated fuzzy constraints, and produces for each model service a set of synthetic Web services with associated synthetic fuzzy constraints. In the experiments we evaluated the effects of the following parameters: (i) the number of services per class, (ii) the number of service classes, (iii) the number of fuzzy constraints per class, (iv) the number of matching methods and (v) the effects of $\varepsilon$ and $\lambda$. The algorithms $TKSC$ and $DTKSC$ were implemented in Java and the experiments were conducted on a Pentium D 2.4 GHz with 2 GB of RAM, running Windows XP.

Figure 3. Performance results

**Performance vs. number of services per class:** we varied the number of services per class from 100 to 1000. Figure 3-(a) shows that our framework can handle hundreds of services in a reasonable time.
The results also show that computing the diversified top-k compositions introduces an insignificant cost when the factor $s$ is small (e.g., $s = s_1$).

**Performance vs. number of classes:** we varied the number of classes from 1 to 6. Figure 3-(b) shows that the execution time is proportional to the number of classes.

**Performance vs. number of constraints per service:** we varied the number of fuzzy constraints from 1 to 10. Figure 3-(c) shows that when the factor $s$ is small (e.g., $s = s_1$), the cost incurred in computing the diversified top-k compositions remains insignificant as the number of constraints increases.

**Performance vs. number of matching methods:** we varied the number of matching methods from 1 to 10. The results of this experiment are shown in Figure 3-(d). Once again, the cost incurred in computing the diversified top-k compositions remains insignificant as the number of methods increases, provided the factor \( s \) has a reasonable value (e.g., \( s = s_1 \)).

**The effects of \( \varepsilon \) and \( \lambda \):** varying \( \varepsilon \) and \( \lambda \) changes the scores/qualities of the top-k and diversified top-k compositions. This may consequently lead to the inclusion or exclusion of a composition from the top-k and diversified top-k compositions. Table 5 shows the top-k and diversified top-k compositions for different values of \( \varepsilon \) and \( \lambda \); the higher the values of these parameters, the higher the global diversity of the diversified top-k compositions. The global diversity of a set of compositions, described in the following equation, is the average of the diversities between each couple of compositions in the set:
\[ \text{div}(\text{top-}k) = \frac{\sum_{i=1}^{k} \sum_{j=i+1}^{k} \text{div}(C_i, C_j)}{(k^2 - k)/2}, \]
where \( \text{div}(C_i, C_j) = \text{Dist}(C_i, C_j) \). Note that the global diversity of the diversified top-k compositions is always higher than that of the top-k compositions.

### Table 5.
Effects of \((\varepsilon, \lambda)\)

<table>
<thead>
<tr> <th rowspan="2">\((\varepsilon, \lambda)\)</th> <th colspan="2">Top-k Compositions</th> <th colspan="2">Diversified Top-k Compositions</th> </tr>
<tr> <th>Component Services</th> <th>Score</th> <th>Component Services</th> <th>Score</th> </tr>
</thead>
<tbody>
<tr> <td rowspan="2">(0.002, 0.05)</td> <td>\(S_{1318}, S_{2202}, S_{3154}, S_{4154}\)</td> <td>0.74703556</td> <td></td> <td></td> </tr>
<tr> <td>\(S_{1318}, S_{3154}, S_{4154}\)</td> <td>0.741032</td> <td></td> <td></td> </tr>
<tr> <td rowspan="2">(0.02, 0.2)</td> <td>\(S_{1318}, S_{2202}, S_{3154}, S_{4154}\)</td> <td>0.59373885</td> <td></td> <td></td> </tr>
<tr> <td>\(S_{1318}, S_{2202}, S_{3154}, S_{4154}\)</td> <td>0.5531574</td> <td></td> <td></td> </tr>
<tr> <td>(0.1, 0.3)</td> <td>\(S_{1318}, S_{2202}, S_{3154}, S_{4154}\)</td> <td>0.5312762</td> <td></td> <td></td> </tr>
</tbody>
</table>

### 7 Conclusions

In this paper, we proposed an approach to compute the top-k Data service compositions for answering fuzzy preference queries. We introduced the concept of **fuzzy dominance relationship** to measure to what extent a service (represented by its vector of matching degrees) dominates another one. This new concept allowed us to rank-order candidate services in their respective classes, and compositions, to compute the top-k ones. We also proposed a method to improve the diversity of the returned compositions while retaining, as far as possible, the compositions with the highest scores. Further, we developed and evaluated suitable algorithms for computing the top-k and diversified top-k compositions. As future work, we intend to apply the proposed fuzzy approach to top-k QoS-based service composition.

### References
Module I Basic elements in C++

Objectives
• In this topic, you will:
– Become familiar with functions, special symbols, and identifiers in C++
– Explore simple data types
– Discover how a program evaluates arithmetic expressions
– Learn about assignment statements

This is a C++ program
```cpp
int main () {
}
```

A simple program in C++
```cpp
#include <iostream>
using namespace std;

const double PI = 3.14159;              // Constant in C++

int main () {
    int radius;                          // Note the declarations
    cout << "Enter a radius ";
    cin >> radius;                       // Input in C++
    // We can declare at the point when we need them
    double area = PI * radius * radius;
    double circumference = 2 * PI * radius;
    cout << "Area is " << area << endl;  // Output in C++
    cout << "Circumference is " << circumference << endl;
}
```

The Basics of a C++ Program
• Function (or subprogram): collection of statements; when executed, accomplishes something
– May be predefined or standard
• Syntax rules: rules that specify which statements (instructions) are legal or valid
• Semantic rules: determine the meaning of the instructions
• Programming language: a set of rules, symbols, and special words

Comments
• Comments are for the reader, not the compiler
• Two types:
– Single line: begin with //
```cpp
// This is a C++ program.
// Welcome to C++ Programming.
```
– Multiple line: enclosed between /* and */
```cpp
/* You can include comments that can occupy several lines.
*/
```

Special Symbols
• **Token**: the smallest individual unit of a program written in any language
• C++ tokens include special symbols, word symbols, and identifiers
• Special symbols in C++ include:

Reserved Words (Keywords)
• **Reserved word symbols** (or keywords):
– Cannot be redefined within a program
– Cannot be used for anything other than their intended use
• Examples: `int`, `float`, `double`, `char`, `const`, `void`, `return`

Identifiers
• **Identifier**: the name of something that appears in a program
– Consists of letters, digits, and the underscore character (_)
– Must begin with a letter or underscore
• C++ is case sensitive
– `NUMBER` is not the same as `number`
• Two predefined identifiers are `cout` and `cin`
• Unlike reserved words, predefined identifiers may be redefined, but it is not a good idea

Data Types
- **Data type**: set of values together with a set of operations
- C++ data types fall into three categories:
  - Simple data type
  - Structured data type
  - Pointers

Simple Data Types
- Three categories of simple data:
  - Integral: integers (numbers without a decimal)
    - Can be further categorized: `char, short, int, long, bool, unsigned char, unsigned short, unsigned int, unsigned long`
  - Floating-point: decimal numbers
  - Enumeration type: user-defined data type
- Different compilers may allow different ranges of values

**int Data Type**
- Examples: 6728, 0, 78, +763
- Cannot use a comma within an integer
  - Commas are only used for separating items in a list

**bool Data Type**
- **bool** type
  - Two values: `true` and `false`
  - Used to manipulate logical (Boolean) expressions
- `bool`, `true`, and `false` are reserved words

**char Data Type**
- The smallest integral data type
- Used for single characters: letters, digits, and special symbols
- Each character is enclosed in single quotes: 'A', 'a', '0', '*', '+', '$', '&'
- A blank space is a character
  - Written ' ', with a space left between the single quotes
char Data Type (cont’d.)
- Different character data sets exist
  - ASCII: American Standard Code for Information Interchange
- Each of the 128 values in the ASCII code set represents a different character
- Characters have a predefined ordering based on the ASCII numeric value
  - Collating sequence: ordering of characters based on the character set code

Floating-Point Data Types
- C++ uses scientific notation to represent real numbers (floating-point notation)

<table>
<thead>
<tr> <th>Decimal Number</th> <th>Scientific Notation</th> <th>C++ Floating-Point Notation</th> </tr>
</thead>
<tbody>
<tr> <td>75.924</td> <td>$7.5924 \times 10^1$</td> <td>7.592400E1</td> </tr>
<tr> <td>0.18</td> <td>$1.8 \times 10^{-1}$</td> <td>1.800000E-1</td> </tr>
<tr> <td>0.00000453</td> <td>$4.53 \times 10^{-5}$</td> <td>4.530000E-5</td> </tr>
<tr> <td>-1.482</td> <td>$-1.482 \times 10^0$</td> <td>-1.482000E0</td> </tr>
<tr> <td>7800.0</td> <td>$7.8 \times 10^3$</td> <td>7.800000E3</td> </tr>
</tbody>
</table>

Floating-Point Data Types (cont’d.)
- **float**: represents any real number - Range: -3.4E+38 to 3.4E+38 (four bytes) - **double**: represents any real number - Range: -1.7E+308 to 1.7E+308 (eight bytes) - Minimum and maximum values of data types are system dependent Data Types and Variables - To declare a variable, must specify the data type it will store - Syntax: `dataType identifier;` - Examples: ``` int counter; double interestRate; char grade; ``` Arithmetic Operators, Operator Precedence, and Expressions - C++ arithmetic operators: - + addition - – subtraction - * multiplication - / division - % modulus (or remainder) operator - +, –, *, and / can be used with integral and floating-point data types - Use % only with integral data types Expressions - **Integral expression**: all operands are integers - Yields an integral result - Example: $2 + 3 \times 5$ - **Floating-point expression**: all operands are floating-point - Yields a floating-point result - Example: $12.8 \times 17.5 - 34.50$ Mixed Expressions • **Mixed expression:** – Has operands of different data types – Contains integers and floating-point • **Examples of mixed expressions:** 2 + 3.5 6 / 4 + 3.9 5.4 * 2 - 13.6 + 18 / 2 Type Conversion (Casting) - **Implicit type conversion**: when value of one type is automatically changed to another type - **Cast operator**: provides explicit type conversion ```cpp static_cast<dataTypeName>(expression) ``` ### Type Conversion (cont’d.) 
#### Example 2-9 <table> <thead> <tr> <th>Expression</th> <th>Evaluates to</th> </tr> </thead> <tbody> <tr> <td><code>static_cast&lt;int&gt;(7.9)</code></td> <td>7</td> </tr> <tr> <td><code>static_cast&lt;int&gt;(3.3)</code></td> <td>3</td> </tr> <tr> <td><code>static_cast&lt;double&gt;(25)</code></td> <td>25.0</td> </tr> <tr> <td><code>static_cast&lt;double&gt;(5+3)</code></td> <td>= <code>static_cast&lt;double&gt;(8)</code> = 8.0</td> </tr> <tr> <td><code>static_cast&lt;double&gt;(15)/2</code></td> <td>= 15.0/2</td> </tr> <tr> <td></td> <td>(because <code>static_cast&lt;double&gt;(15)</code> = 15.0)</td> </tr> <tr> <td></td> <td>= 15.0/2.0 = 7.5</td> </tr> <tr> <td><code>static_cast&lt;double&gt;(15/2)</code></td> <td>= <code>static_cast&lt;double&gt;(7)</code> (because 15/2 = 7)</td> </tr> <tr> <td></td> <td>= 7.0</td> </tr> <tr> <td><code>static_cast&lt;int&gt;(7.8 + static_cast&lt;double&gt;(15)/2)</code></td> <td>= <code>static_cast&lt;int&gt;(7.8 + 7.5)</code></td> </tr> <tr> <td></td> <td>= <code>static_cast&lt;int&gt;(15.3)</code></td> </tr> <tr> <td></td> <td>= 15</td> </tr> <tr> <td><code>static_cast&lt;int&gt;(7.8 + static_cast&lt;double&gt;(15/2))</code></td> <td>= <code>static_cast&lt;int&gt;(7.8 + 7.0)</code></td> </tr> <tr> <td></td> <td>= <code>static_cast&lt;int&gt;(14.8)</code></td> </tr> <tr> <td></td> <td>= 14</td> </tr> </tbody> </table> Variables, Assignment Statements, and Input Statements • Data must be loaded into main memory before it can be manipulated • Storing data in memory is a two-step process: – Instruct computer to allocate memory – Include statements to put data into memory Allocating Memory with Constants and Variables - **Named constant**: memory location whose content can’t change during execution - **Syntax to declare a named constant**: ``` const dataType identifier = value; ``` - In C++, `const` is a reserved word **EXAMPLE 2-11** Consider the following C++ statements: ```c++ const double CONVERSION = 2.54; const int 
NO_OF_STUDENTS = 20;
const char BLANK = ' ';
```

Allocating Memory with Constants and Variables (cont’d.)
- **Variable**: memory location whose content may change during execution
- Syntax to declare one or more variables:
```cpp
dataType identifier, identifier, ...;
```

**EXAMPLE 2-12** Consider the following statements:
```cpp
double amountDue;
int counter;
char ch;
int x, y;
string name;
```

Putting Data into Variables
• Ways to place data into a variable:
– Use C++’s assignment statement
– Use input (read) statements

Assignment Statement
• The assignment statement takes the form:
```
variable = expression;
```
• Expression is evaluated and its value is assigned to the variable on the left side
• A variable is said to be initialized the first time a value is placed into it
• In C++, = is called the assignment operator

EXAMPLE 2-13 Suppose you have the following variable declarations:
```cpp
int num1, num2;
double sale;
char first;
string str;
```
Now consider the following assignment statements:
```cpp
num1 = 4;
num2 = 4 * 5 - 11;
sale = 0.02 * 1000;
first = 'D';
str = "It is a sunny day.";
``` Declaring & Initializing Variables • Not all types of variables are initialized automatically • Variables can be initialized when declared: ``` int first=13, second=10; char ch=' '; double x=12.6; ``` • All variables must be initialized before they are used – But not necessarily during declaration Input (Read) Statement - **cin** is used with `>>` to gather input ``` cin >> variable >> variable ...; ``` - This is called an **input (read)** statement - The **stream extraction operator** is `>>` - For example, if miles is a double variable ``` cin >> miles; ``` - Causes computer to get a value of type **double** and places it in the variable **miles** • Using more than one variable in `cin` allows more than one value to be read at a time • Example: if `feet` and `inches` are variables of type `int`, this statement: ```c++ cin >> feet >> inches; ``` – Inputs two integers from the keyboard – Places them in variables `feet` and `inches` respectively // This program illustrates how input statements work. #include <iostream> using namespace std; int main() { int feet; int inches; cout << "Enter two integers separated by one or more spaces: "; cin >> feet >> inches; cout << endl; cout << "Feet = " << feet << endl; cout << "Inches = " << inches << endl; return 0; } Sample Run: In this sample run, the user input is shaded. Enter two integers separated by one or more spaces: 23 7 Feet = 23 Inches = 7 Increment and Decrement Operators • Increment operator: increase variable by 1 – Pre-increment: `++variable` – Post-increment: `variable++` • Decrement operator: decrease variable by 1 – Pre-decrement: `--variable` – Post-decrement: `variable--` • What is the difference between the following? 
```cpp
x = 5;          x = 5;
y = ++x;        y = x++;
```

Output
• The syntax of `cout` and `<<` is:
```
cout << expression or manipulator << expression or manipulator...;
```
– Called an output statement
• The stream insertion operator is `<<`
• Expression is evaluated and its value is printed at the current cursor position on the screen

Output (cont’d.)
• A manipulator is used to format the output
– Example: `endl` causes insertion point to move to beginning of next line

EXAMPLE 2-21 Consider the following statements. The output is shown to the right of each statement.

<table> <thead> <tr> <th>Statement</th> <th>Output</th> </tr> </thead> <tbody> <tr> <td><code>cout &lt;&lt; 29 / 4 &lt;&lt; endl;</code></td> <td>7</td> </tr> <tr> <td><code>cout &lt;&lt; &quot;Hello there.&quot; &lt;&lt; endl;</code></td> <td>Hello there.</td> </tr> <tr> <td><code>cout &lt;&lt; 12 &lt;&lt; endl;</code></td> <td>12</td> </tr> <tr> <td><code>cout &lt;&lt; &quot;4 + 7&quot; &lt;&lt; endl;</code></td> <td>4 + 7</td> </tr> <tr> <td><code>cout &lt;&lt; 4 + 7 &lt;&lt; endl;</code></td> <td>11</td> </tr> <tr> <td><code>cout &lt;&lt; 'A' &lt;&lt; endl;</code></td> <td>A</td> </tr> <tr> <td><code>cout &lt;&lt; &quot;4 + 7 = &quot; &lt;&lt; 4 + 7 &lt;&lt; endl;</code></td> <td>4 + 7 = 11</td> </tr> <tr> <td><code>cout &lt;&lt; 2 + 3 * 5 &lt;&lt; endl;</code></td> <td>17</td> </tr> <tr> <td><code>cout &lt;&lt; &quot;Hello \n there.&quot; &lt;&lt; endl;</code></td> <td>Hello <br>&nbsp;there.</td> </tr> </tbody> </table>

Output (cont’d.)
• The new line character is '\n'
– May appear anywhere in the string
```cpp
cout << "Hello there.";
cout << "My name is James.";
```
**Output:** Hello there.My name is James.

```cpp
cout << "Hello there.\n";
cout << "My name is James.";
```
**Output:**
Hello there.
My name is James.
### TABLE 2-4 Commonly Used Escape Sequences

<table> <thead> <tr> <th>Escape Sequence</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>\n</td> <td>Newline: Cursor moves to the beginning of the next line</td> </tr> <tr> <td>\t</td> <td>Tab: Cursor moves to the next tab stop</td> </tr> <tr> <td>\b</td> <td>Backspace: Cursor moves one space to the left</td> </tr> <tr> <td>\r</td> <td>Return: Cursor moves to the beginning of the current line (not the next line)</td> </tr> <tr> <td>\\</td> <td>Backslash: Backslash is printed</td> </tr> <tr> <td>\'</td> <td>Single quotation: Single quotation mark is printed</td> </tr> <tr> <td>\&quot;</td> <td>Double quotation: Double quotation mark is printed</td> </tr> </tbody> </table>

Preprocessor Directives
- C++ has a small number of operations
- Many functions and symbols needed to run a C++ program are provided as a collection of libraries
- Every library has a name and is referred to by a header file
- Preprocessor directives are commands supplied to the preprocessor program
- All preprocessor commands begin with #
- No semicolon at the end of these commands

Preprocessor Directives (cont’d.)
• Syntax to include a header file:
```cpp
#include <headerFileName>
```
• For example:
```cpp
#include <iostream>
```
– Causes the preprocessor to include the header file `iostream` in the program
• Preprocessor commands are processed before the program goes through the compiler

namespace and Using cin and cout in a Program
- cin and cout are declared in the header file iostream, but within the std namespace
- To use cin and cout in a program, use the following two statements:
```cpp
#include <iostream>
using namespace std;
```

Creating a C++ Program
- A C++ program is a collection of functions, one of which is the function `main`.
- The first line of the function `main` is called the heading of the function:
```
int main()
```
- The statements enclosed between the curly braces ({ and }) form the body of the function.

Creating a C++ Program (cont’d.)
• A C++ program contains two types of statements:
– Declaration statements: declare things, such as variables
– Executable statements: perform calculations, manipulate data, create output, accept input, etc.

Use of Semicolons, Brackets, and Commas
- All C++ statements end with a semicolon
- Also called a statement terminator
- { and } are not C++ statements
- Can be regarded as delimiters
- Commas separate items in a list

Given the length and width of a rectangle, this C++ program computes and outputs the perimeter and area of the rectangle.

```cpp
#include <iostream>
using namespace std;

int main()
{
    double length;
    double width;
    double area;
    double perimeter;

    cout << "Program to compute and output the perimeter and "
         << "area of a rectangle." << endl;

    length = 6.0;
    width = 4.0;
    perimeter = 2 * (length + width);
    area = length * width;

    cout << "Perimeter = " << perimeter << endl;
    cout << "Area = " << area << endl;

    return 0;
}
```

Variable declarations. A statement such as `double length;` instructs the system to allocate memory space and name it `length`.

Assignment statement. A statement such as `length = 6.0;` instructs the system to store 6.0 in the memory space `length`.

Summary (cont’d)
```
length = 6.0;
width = 4.0;
perimeter = 2 * (length + width);
area = length * width;
return 0;
```

More on Input/Output

Objectives
• In this topic, you will:
– Learn what a stream is and examine input and output streams
– Explore how to read data from the standard input device
– Learn how to use predefined functions in a program
– Explore how to use the input stream functions get, clear and ignore

Objectives (cont’d.)
– Become familiar with input failure – Learn how to write data to the standard output device – Discover how to use manipulators in a program to format output – Learn how to perform input and output operations with the `string` data type – Learn how to debug logic errors – Become familiar with file input and output I/O Streams and Standard I/O Devices - **I/O**: sequence of bytes (stream of bytes) from source to destination - Bytes are usually characters, unless program requires other types of information - **Stream**: sequence of characters from source to destination - **Input stream**: sequence of characters from an input device to the computer - **Output stream**: sequence of characters from the computer to an output device I/O Streams and Standard I/O Devices (cont’d.) • Use `iostream` header file to receive data from keyboard and send output to the screen – Contains definitions of two data types: • `istream`: input stream • `ostream`: output stream – Has two variables: • `cin`: stands for common input • `cout`: stands for common output I/O Streams and Standard I/O Devices (cont’d.) 
• Variable declaration is similar to:
– `istream cin;`
– `ostream cout;`
• To use `cin` and `cout`, the preprocessor directive `#include <iostream>` must be used
• **Input stream variables**: type `istream`
• **Output stream variables**: type `ostream`

The syntax of an input statement using `cin` and the extraction operator `>>` is:
```
cin >> variable >> variable...;
```
The extraction operator `>>` is binary
- Left-side operand is an input stream variable
- Example: `cin`
- Right-side operand is a variable

cin and the get Function
• The get function
– Inputs next character (including whitespace)
– Stores in memory location indicated by its argument
• The syntax of cin and the get function:
```cpp
cin.get(varChar);
```
• varChar
– Is a char variable
– Is the argument (or parameter) of the function

cin and the ignore Function
• *ignore* function
– Discards a portion of the input
• The syntax to use the function *ignore* is:
```
cin.ignore(intExp, chExp);
```
– *intExp* is an integer expression
– *chExp* is a *char* expression
• If *intExp* is a value *m*, the statement says to ignore the next *m* characters or all characters until the character specified by *chExp*, whichever comes first

cin and the ignore Function (cont’d.)

Consider the declaration:
```cpp
int a, b;
```
and the following two lines of input:
```
25 67 89 43 72
12 78 34
```
Now consider the following statements:
```cpp
cin >> a;
cin.ignore(100, '\n');
cin >> b;
```
The first statement, `cin >> a;`, stores 25 in `a`. The second statement, `cin.ignore(100, '\n');`, discards all of the remaining numbers in the first line. The third statement, `cin >> b;`, stores 12 (from the next line) in `b`.
Output and Formatting Output - Syntax of `cout` when used with `<<` ```cpp cout << expression or manipulator << expression or manipulator ...; ``` - `expression` is evaluated - `value` is printed - `manipulator` is used to format the output - Example: `endl` setprecision Manipulator • Syntax: ```cpp setprecision(n) ``` • Outputs decimal numbers with up to \( n \) decimal places • Must include the header file `iomanip`: ```cpp #include <iomanip> ``` fixed Manipulator - **fixed** outputs floating-point numbers in a fixed decimal format - Example: `cout << fixed;` - Disable by using the stream member function `unsetf` - Example: `cout.unsetf(ios::fixed);` - **scientific manipulator**: outputs floating-point numbers in scientific format showpoint Manipulator • `showpoint` forces output to show the decimal point and trailing zeros • Examples: ``` - cout << showpoint; - cout << fixed << showpoint; ``` `setw` - Outputs the value of an expression in a specified number of columns - `cout << setw(5) << x << endl;` - If number of columns exceeds the number of columns required by the expression - Output of the expression is right-justified - Unused columns to the left are filled with spaces - Must include the header file `iomanip` Additional Output Formatting Tools • Additional formatting tools that give you more control over your output: – `setfill` manipulator – `left` and `right` manipulators Types of Manipulators • Two types of manipulators: – With parameters – Without parameters • Parameterized: require `iomanip` header – `setprecision`, `setw`, and `setfill` • Nonparameterized: require `iostream` header – `endl`, `fixed`, `showpoint`, `left`, and `flush` Control Structures II (Selection) Objectives - In this topic, you will: - Learn about control structures - Examine relational and logical operators - Explore how to form and evaluate logical (Boolean) expressions - Discover how to use the selection control structures `if`, `if...else`, and `switch` in a program Logical 
(Boolean) Operators and Logical Expressions
- **Logical (Boolean) operators**: enable you to combine logical expressions

<table> <thead> <tr> <th>Operator</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>!</td> <td>not</td> </tr> <tr> <td>&amp;&amp;</td> <td>and</td> </tr> <tr> <td>||</td> <td>or</td> </tr> </tbody> </table>

The `bool` Data Type and Logical (Boolean) Expressions
- The data type `bool` has logical (Boolean) values `true` and `false`
- `bool`, `true`, and `false` are reserved words
- The identifier `true` has the value `1`
- The identifier `false` has the value `0`

Selection: if and if...else
- **if** and **if...else** statements can be used to create:
- One-way selection
- Two-way selection
- Multiple selections

One-Way Selection
• One-way selection syntax:
```c++
if (expression)
    statement
```
• Statement is executed if the value of the expression is true
• Statement is bypassed if the value is false; program goes to the next statement
• Expression is called a decision maker

Two-Way Selection
- Two-way selection syntax:
```cpp
if (expression)
    statement1
else
    statement2
```
- If expression is true, `statement1` is executed; otherwise, `statement2` is executed
- `statement1` and `statement2` are any C++ statements

Compound (Block of) Statements
- **Compound statement (block of statements)**: statements enclosed in { and }
- A compound statement functions like a single statement

```cpp
if (age > 18)
{
    cout << "Eligible to vote." << endl;
    cout << "No longer a minor." << endl;
}
else
{
    cout << "Not eligible to vote." << endl;
    cout << "Still a minor." << endl;
}
```

Multiple Selections: Nested if
- **Nesting**: one control statement is located within another
- An `else` is associated with the most recent `if` that has not been paired with an `else`

Multiple Selections: Nested if (cont’d.)

EXAMPLE 4-16 Assume that score is a variable of type int. Based on the value of score, the following code outputs the grade:
```cpp
if (score >= 90)
    cout << "The grade is A."
<< endl; else if (score >= 80) cout << "The grade is B." << endl; else if (score >= 70) cout << "The grade is C." << endl; else if (score >= 60) cout << "The grade is D." << endl; else cout << "The grade is F." << endl; ``` Short-Circuit Evaluation - **Short-circuit evaluation**: evaluation of a logical expression stops as soon as the value of the expression is known - **Example**: ``` (age >= 21) || (x == 5) //Line 1 (grade == 'A') && (x >= 7) //Line 2 ``` Confusion Between the Equality (==) and Assignment (=) Operators • C++ allows you to use any expression that can be evaluated to either true or false as an expression in the if statement: ```cpp if (x = 5) cout << "The value is five." << endl; ``` • The appearance of = in place of == resembles a silent killer – It is not a syntax error – It is a logical error Conditional Operator (?:) - Conditional operator (?:) - Ternary operator: takes 3 arguments - Syntax for the conditional operator: ``` expression1 ? expression2 : expression3 ``` - If `expression1` is true, the result of the conditional expression is `expression2` - Otherwise, the result is `expression3` - Example: `max = (a >= b) ? 
a : b;`

switch Structures
• **switch structure**: alternate to if-else
• switch (integral) expression is evaluated first
• Value of the expression determines which corresponding action is taken
• Expression is sometimes called the selector

Control Structures II (Repetition)

Objectives
• In this topic, you will:
– Learn about repetition (looping) control structures
– Explore how to construct and use counter-controlled, sentinel-controlled, flag-controlled, and EOF-controlled repetition structures
– Examine break and continue statements
– Discover how to form and use nested control structures

while Looping (Repetition) Structure
- Syntax of the `while` statement:
```cpp
while (expression)
    statement
```
- `statement` can be simple or compound
- `expression` acts as a decision maker and is usually a logical expression
- `statement` is called the body of the loop
- The parentheses are part of the syntax

while Looping (Repetition) Structure (cont’d.)

EXAMPLE 5-1 Consider the following C++ program segment: (Assume that i is an int variable.)
```cpp
i = 0;            //Line 1
while (i <= 20)   //Line 2
{                 //Line 3
    cout << i << " ";
    i = i + 5;
}                 //Line 4
cout << endl;
```
Sample Run: 0 5 10 15 20

Case 1: Counter-Controlled while Loops
- When you know exactly how many times the statements need to be executed
- Use a counter-controlled while loop
```cpp
counter = 0;         //initialize the loop control variable
while (counter < N)  //test the loop control variable
{
    . . .
    counter++;       //update the loop control variable
    . . .
} ``` Case 2: Sentinel-Controlled while Loops - **Sentinel** variable is tested in the condition - Loop ends when sentinel is encountered ```cpp cin >> variable; //initialize the loop control variable while (variable != sentinel) //test the loop control variable { cin >> variable; //update the loop control variable } ``` Case 3: Flag-Controlled while Loops - **Flag-controlled while loop**: uses a `bool` variable to control the loop ```cpp found = false; //initialize the loop control variable while (!found) //test the loop control variable { if (expression) found = true; //update the loop control variable } ``` • The expression in a `while` statement can be complex – Example: ```cpp while ((noOfGuesses < 5) && (!isGuessed)) { // ... } ``` for Looping (Repetition) Structure • *for* loop: called a counted or indexed *for* loop • Syntax of the *for* statement: ``` for (initial statement; loop condition; update statement) statement ``` • The *initial statement*, *loop condition*, and *update statement* are called *for* loop control statements **for Looping (Repetition) Structure (cont’d.)** **EXAMPLE 5-9** The following `for` loop prints the first 10 nonnegative integers: ``` for (i = 0; i < 10; i++) cout << i << " "; cout << endl; ``` The initial statement, `i = 0;`, initializes the `int` variable `i` to 0. Next, the loop condition, `i < 10`, is evaluated. Because `0 < 10` is `true`, the print statement executes and outputs 0. The update statement, `i++`, then executes, which sets the value of `i` to 1. Once again, the loop condition is evaluated, which is still `true`, and so on. When `i` becomes 10, the loop condition evaluates to `false`, the `for` loop terminates, and the statement following the `for` loop executes. for Looping (Repetition) Structure (cont’d.) **Example 5.10** 1. The following `for` loop outputs `Hello!` and a star (on separate lines) five times: ```cpp for (i = 1; i <= 5; i++) { cout << "Hello!" << endl; cout << "*" << endl; } ``` 2. 
Consider the following `for` loop:
```cpp
for (i = 1; i <= 5; i++)
    cout << "Hello!" << endl;
    cout << "*" << endl;
```
This loop outputs `Hello!` five times and the star only once. Note that the `for` loop controls only the first output statement because the two output statements are not made into a compound statement. Therefore, the first output statement executes five times because the `for` loop body executes five times. After the `for` loop executes, the second output statement executes only once. The indentation, which is ignored by the compiler, is nevertheless misleading.

do...while Looping (Repetition) Structure
• Syntax of a `do ... while` loop:
```cpp
do
    statement
while (expression);
```
• The `statement` executes first, and then the `expression` is evaluated
- As long as `expression` is true, loop continues
• To avoid an infinite loop, body must contain a statement that makes the `expression` false

do...while Looping (Repetition) Structure (cont’d.)
- The statement can be simple or compound
- Loop always iterates at least once

do...while Looping (Repetition) Structure (cont’d.)

EXAMPLE 5-18
```cpp
i = 0;
do
{
    cout << i << " ";
    i = i + 5;
}
while (i <= 20);
```
The output of this code is: 0 5 10 15 20. After 20 is output, the statement `i = i + 5;` changes the value of i to 25, so `i <= 20` becomes false, which halts the loop.

break and continue Statements
• **break** and **continue** alter the flow of control
• **break** statement is used for two purposes:
– To exit early from a loop
• Can eliminate the use of certain (flag) variables
– To skip the remainder of a **switch** structure
• After break executes, the program continues with the first statement after the structure

break and continue Statements (cont’d.)
• **continue** is used in **while**, **for**, and **do...while** structures
• When executed in a loop
– It skips remaining statements and proceeds with the next iteration of the loop

**DO NOT USE break or continue in any UOW subjects when dealing with repetition control structures.**

Let us now look at workshop 1 question.
{"Source-Url": "https://www.uow.edu.au/~akheng/WORKSHOP/Module_1.pdf", "len_cl100k_base": 8324, "olmocr-version": "0.1.53", "pdf-total-pages": 96, "total-fallback-pages": 0, "total-input-tokens": 127528, "total-output-tokens": 11658, "length": "2e13", "weborganizer": {"__label__adult": 0.0005040168762207031, "__label__art_design": 0.0004794597625732422, "__label__crime_law": 0.00030517578125, "__label__education_jobs": 0.004180908203125, "__label__entertainment": 8.958578109741211e-05, "__label__fashion_beauty": 0.0001854896545410156, "__label__finance_business": 0.0001556873321533203, "__label__food_dining": 0.0005121231079101562, "__label__games": 0.0013475418090820312, "__label__hardware": 0.0011129379272460938, "__label__health": 0.00034308433532714844, "__label__history": 0.0002613067626953125, "__label__home_hobbies": 0.00012755393981933594, "__label__industrial": 0.0004115104675292969, "__label__literature": 0.0003020763397216797, "__label__politics": 0.00024509429931640625, "__label__religion": 0.0006432533264160156, "__label__science_tech": 0.0044097900390625, "__label__social_life": 0.00012153387069702148, "__label__software": 0.003997802734375, "__label__software_dev": 0.97900390625, "__label__sports_fitness": 0.0003917217254638672, "__label__transportation": 0.0007152557373046875, "__label__travel": 0.00026726722717285156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30341, 0.02043]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30341, 0.76862]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30341, 0.77217]], "google_gemma-3-12b-it_contains_pii": [[0, 31, false], [31, 274, null], [274, 324, null], [324, 797, null], [797, 1165, null], [1165, 1476, null], [1476, 1676, null], [1676, 1926, null], [1926, 2322, null], [2322, 2503, null], [2503, 2829, null], [2829, 2886, null], [2886, 3052, null], [3052, 3260, null], 
[3260, 3557, null], [3557, 3907, null], [3907, 4515, null], [4515, 4788, null], [4788, 5003, null], [5003, 5309, null], [5309, 5576, null], [5576, 5790, null], [5790, 6018, null], [6018, 7594, null], [7594, 7855, null], [7855, 8273, null], [8273, 8622, null], [8622, 8756, null], [8756, 9067, null], [9067, 9352, null], [9352, 9668, null], [9668, 10049, null], [10049, 10365, null], [10365, 10858, null], [10858, 11277, null], [11277, 11570, null], [11570, 12421, null], [12421, 12730, null], [12730, 13524, null], [13524, 13877, null], [13877, 14199, null], [14199, 14458, null], [14458, 14761, null], [14761, 15007, null], [15007, 15230, null], [15230, 15912, null], [15912, 16033, null], [16033, 16054, null], [16054, 16344, null], [16344, 16682, null], [16682, 17111, null], [17111, 17452, null], [17452, 17761, null], [17761, 18027, null], [18027, 18336, null], [18336, 18726, null], [18726, 19189, null], [19189, 19464, null], [19464, 19668, null], [19668, 19967, null], [19967, 20146, null], [20146, 20496, null], [20496, 20669, null], [20669, 20948, null], [20948, 20982, null], [20982, 21270, null], [21270, 21536, null], [21536, 21799, null], [21799, 21957, null], [21957, 22230, null], [22230, 22496, null], [22496, 22634, null], [22634, 22827, null], [22827, 23014, null], [23014, 23477, null], [23477, 23730, null], [23730, 24103, null], [24103, 24460, null], [24460, 24696, null], [24696, 24731, null], [24731, 25063, null], [25063, 25392, null], [25392, 25714, null], [25714, 26076, null], [26076, 26409, null], [26409, 26740, null], [26740, 26900, null], [26900, 27212, null], [27212, 27912, null], [27912, 28805, null], [28805, 29152, null], [29152, 29288, null], [29288, 29602, null], [29602, 29971, null], [29971, 30303, null], [30303, 30341, null]], "google_gemma-3-12b-it_is_public_document": [[0, 31, true], [31, 274, null], [274, 324, null], [324, 797, null], [797, 1165, null], [1165, 1476, null], [1476, 1676, null], [1676, 1926, null], [1926, 2322, null], [2322, 2503, 
null], [2503, 2829, null], [2829, 2886, null], [2886, 3052, null], [3052, 3260, null], [3260, 3557, null], [3557, 3907, null], [3907, 4515, null], [4515, 4788, null], [4788, 5003, null], [5003, 5309, null], [5309, 5576, null], [5576, 5790, null], [5790, 6018, null], [6018, 7594, null], [7594, 7855, null], [7855, 8273, null], [8273, 8622, null], [8622, 8756, null], [8756, 9067, null], [9067, 9352, null], [9352, 9668, null], [9668, 10049, null], [10049, 10365, null], [10365, 10858, null], [10858, 11277, null], [11277, 11570, null], [11570, 12421, null], [12421, 12730, null], [12730, 13524, null], [13524, 13877, null], [13877, 14199, null], [14199, 14458, null], [14458, 14761, null], [14761, 15007, null], [15007, 15230, null], [15230, 15912, null], [15912, 16033, null], [16033, 16054, null], [16054, 16344, null], [16344, 16682, null], [16682, 17111, null], [17111, 17452, null], [17452, 17761, null], [17761, 18027, null], [18027, 18336, null], [18336, 18726, null], [18726, 19189, null], [19189, 19464, null], [19464, 19668, null], [19668, 19967, null], [19967, 20146, null], [20146, 20496, null], [20496, 20669, null], [20669, 20948, null], [20948, 20982, null], [20982, 21270, null], [21270, 21536, null], [21536, 21799, null], [21799, 21957, null], [21957, 22230, null], [22230, 22496, null], [22496, 22634, null], [22634, 22827, null], [22827, 23014, null], [23014, 23477, null], [23477, 23730, null], [23730, 24103, null], [24103, 24460, null], [24460, 24696, null], [24696, 24731, null], [24731, 25063, null], [25063, 25392, null], [25392, 25714, null], [25714, 26076, null], [26076, 26409, null], [26409, 26740, null], [26740, 26900, null], [26900, 27212, null], [27212, 27912, null], [27912, 28805, null], [28805, 29152, null], [29152, 29288, null], [29288, 29602, null], [29602, 29971, null], [29971, 30303, null], [30303, 30341, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 30341, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": 
Categorization of RDF Data Management Systems

Khadija Alaoui*, Mohamed Bahaj
MIET Lab, Faculty of Sciences and Techniques, Hassan I University, Settat, 26422, Morocco

Article history: Received: 28 December, 2020; Accepted: 23 February, 2021; Online: 10 March, 2021
Keywords: Triplestore, RDF, OWL, SPARQL, Semantic web, Big Data, Cloud, NoSQL, IoT

ABSTRACT: The wide acceptance of the semantic web language RDF for the creation of ontologies in various application fields has led to the emergence of numerous RDF data processing solutions, the so-called triplestores, for storing RDF data and querying it with the RDF query language SPARQL. These solutions are, however, developed from various perspectives and on the basis of various architectures. Users must therefore be able to distinguish between these systems in order to choose the appropriate triplestore for efficient processing of their RDF data, depending on their objectives, the characteristics of their data and the technologies at hand. To this end, we give an extended categorization of RDF data stores according to their main characteristics, and we review relevant existing triplestores within their respective categories. The categorization is established according to the motivations behind adopting one or another triplestore for the main tasks of data storage and SPARQL querying. It furthermore considers various aspects that specifically concern RDF data modeling, the organization of RDF data, the processing of SPARQL queries, scalability, and the diverse related data processing technologies.

1. Introduction

The "Resource Description Framework" (RDF) has been widely used during the last two decades for creating semantic ontologies in various application areas, and it is standardized by the "World Wide Web Consortium" (W3C) as the language of the semantic web (https://www.w3.org/TR/rdf11-primer/).
RDF represents data in the form of (S, P, O) triples to express the semantic information that an entity or resource S is in a relationship, through the relation or predicate P, with an object O that is either a resource or a literal value. This modeling approach lets data be represented as RDF directed labeled graphs, where resources and literal values are the nodes of the graph, and a node n1 is connected to a node n2 by an arc labeled with a predicate P if (n1, P, n2) is an RDF triple. To query the RDF triples, the W3C also standardized the query language SPARQL ("SPARQL Protocol and RDF Query Language" - https://www.w3.org/TR/sparql11-overview/). For interlinking purposes and for ontology identification, entities are also endowed with URIs (Uniform Resource Identifiers). This mechanism has the advantage of assigning resources to groups, also called ontologies, and of allowing resources of one group to be interlinked with resources of other groups, yielding heterogeneous RDF data graphs. It is exactly this simple semantic format offered by RDF to model data within ontologies that transformed the classical web from a web of static pages into an intelligent web of interlinked data. The RDF format indeed makes it possible for machines to intelligently navigate the interlinked data, since it enables formulating semantics about such data. Furthermore, the schema languages RDFS ("RDF Schema" - https://www.w3.org/TR/rdf-schema/) and OWL ("Web Ontology Language" - https://www.w3.org/TR/owl2-syntax/), which are also W3C standards, offer various semantic constructs to model the schemas of RDF data and allow intelligent navigation through such data using inference and reasoning techniques. RDF also offers various advantages for the semantic modeling of enterprise data through its flexible schema definition, and it offers a better alternative to the classical entity-relationship modeling approach [1], [2].
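To make the triple model concrete, here is a minimal, illustrative sketch (the tuple-set layout and the `match` helper are our own, not part of any system discussed in this paper): a graph is a set of (S, P, O) tuples, and a basic pattern match binds variables marked with a leading "?" against the triples.

```python
# Sketch of the RDF triple model: a graph as a set of (S, P, O) tuples,
# plus a basic pattern match binding "?"-prefixed variables.

def match(graph, pattern):
    """Return the variable bindings for every triple matching the pattern."""
    results = []
    for triple in graph:
        bindings = {}
        for pat, term in zip(pattern, triple):
            if pat.startswith("?"):
                bindings[pat] = term      # bind the variable to this term
            elif pat != term:
                break                     # constant mismatch: reject triple
        else:
            results.append(bindings)
    return results

graph = {
    ("ex:Jabir", "ex:teach", "ex:java"),
    ("ex:Jabir", "rdf:type", "ex:teacher"),
    ("ex:java", "rdf:type", "ex:course"),
}

# Who teaches ex:java? (analogous to SELECT ?x WHERE { ?x ex:teach ex:java })
print(match(graph, ("?x", "ex:teach", "ex:java")))  # [{'?x': 'ex:Jabir'}]
```

A real triplestore adds indexing, persistence and join processing on top of exactly this abstraction.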
All these factors have led to the appearance of an important number of management systems for handling the storage and querying of RDF data. The abundance and variety of RDF data processing systems, also called triplestores, was encouraged in a natural way by the emergence of various technologies such as NoSQL ("Not only SQL", where SQL is the Structured Query Language), P2P (peer-to-peer) and Big Data technologies, and was also imposed by the multiple varieties of RDF applications. A multitude of RDF triplestores have indeed been developed, each with its own features that distinguish it from the others. So, for a specific use case or application involving RDF for data modeling, an appropriate RDF storage and processing system must be chosen from the multitude of existing triplestores, depending on multiple factors. In this sense, this work presents an extensive extension of the preliminary categorization of triplestores we gave in our conference paper [3]. The extension consists of a detailed categorization of RDF management systems with a review of relevant triplestores within their associated categories. Beyond their respect of RDF modeling constructs and their implementation of elements of the query language SPARQL, RDF data management systems are distinguished according to the strategies used for query processing and data storage. These strategies are determined on one hand by the system architecture: whether it is centralized or distributed, and whether it is a P2P, cloud or Big Data one. On the other hand, they depend on the adopted storage and querying methods: whether they rely on other existing data processing frameworks or are designed from scratch, independently of any such frameworks.

*Corresponding Author: Khadija Alaoui, alaoui_khadija@outlook.com
www.astesj.com - https://dx.doi.org/10.25046/aj060225
Furthermore, each category is presented according to the strategy used to handle RDF data storage and processing, taking into consideration the structures used for storage, the indexing schemes and the SPARQL implementation. For the organization of data storage, partitioning and indexing schemes are of particular interest since they affect the speed of query execution. This detailed categorization along the data processing architectures, the characteristics of the systems and the targeted deployment machines is therefore our main contribution in this work. A categorization with respect to such elements is of great importance for data management, since these elements directly affect the performance as well as the scalability of the triplestores at hand. To illustrate the categorization we also review major relevant existing triplestores within their respective categories. The remaining sections of the paper are structured as follows. Section 2 presents the semantic web standards RDF, SPARQL, RDFS and OWL. Section 3 gives a summary of our categorization approach. Sections 4 to 9 present the main categories with their respective sub-categories. Section 10 summarizes the categories with a discussion of related works. Section 11 concludes this work.

2. Semantic Web Standards

2.1. RDF Data Model
The representation of data with RDF is based on modeling all information as a set of sentences of the form 'Subject Predicate Object', yielding triples (S:=Subject, P:=Predicate, O:=Object).
Each triple (S,P,O) means that the resource S is in a relationship through P with the object O. Objects can be either resources or literal values. In the example of Figure 2, we have for example the triple (ex:Jabir, ex:teach, ex:java).

Figure 2: RDF Example in N3-Notation

2.2. RDFS and OWL
RDFS offers constructs to describe the elements of an RDF graph in a meta-model. The statements of the RDFS meta-model are themselves expressed as RDF triples. The meta-model declares the classes of resources and predicates used in the RDF graph. Ranges and domains of predicates can also be given in the meta-model. RDFS further offers the possibility of creating hierarchies between classes using constructs such as "subClassOf". For example, in Figure 3, the class "ex:course" is declared as a subclass of "ex:teachingactivity". OWL extends RDFS with many semantic constructs, allowing the definition of more expressive RDF graphs and offering more reasoning possibilities on them. OWL meta-models are also expressed in RDF, which makes reasoning based on description logic easier. As examples of OWL constructs we mention 'ObjectProperty' and 'DatatypeProperty' for defining types of predicates, and 'AllValuesFrom', 'SomeValuesFrom', 'ComplementOf' and 'DisjointWith' for constraints on the domains and ranges of predicates. OWL also provides constructs for creating new types from other types, as well as constructs for properties of predicates, for example whether they are invertible, symmetric or transitive.

2.3. SPARQL
To query RDF data, the query language SPARQL has been proposed and standardized by the W3C. SPARQL is very similar to SQL and can perform complex joins over various RDF data graphs in the same query. Figure 4 gives a simple example that finds who teaches "java".

```
PREFIX ex: <https://www.mySite.ma/example#>
SELECT ?x WHERE { ?x ex:teach ex:java . }
```

Beyond SELECT queries that extract information from RDF data, SPARQL also offers ASK queries, which return true if the query condition is satisfied and false otherwise; CONSTRUCT queries, which add new triples to RDF graphs as their results; and DESCRIBE queries, which extract information about a resource. SPARQL queries can also handle aggregations and may contain OPTIONAL clauses with optional conditions, as well as a FILTER clause to further filter their results.

3. Categorization approach
The categorization approach we use is mainly based on the context in which RDF data is used. Within this context the following elements are considered:
- The storage technique used: we mainly focus on its adaptation to the RDF model and on the environment for which it is used. By environment we mean whether the solution runs on a single machine or on a cluster of machines, and whether it is intended for a cloud, P2P or desktop context.
- The nature of the destination devices: we handle the cases of using RDF data on constrained devices, on desktops or on clusters.
- System scalability: we especially separate the solutions depending on the data volumes to be processed.
- Data organization: this point is very important since SPARQL queries may pose many challenges, related among others to join and sub-graph processing, especially when RDF data is scattered among various graphs or stored in multiple files or on multiple nodes.

4. Native versus Non-Native Triplestores
Native RDF data systems are those systems that are built from scratch for the sole purpose of handling RDF data, without relying on any existing data management solution. This means that the solutions associated with such native stores are implemented independently of any existing database engine for the storage or querying of any kind of data.
To achieve their tasks, native stores may however be built using functionalities of the underlying file system and, of course, existing programming languages such as C, C++ and Java. In contrast, non-native triplestores are those stores that rely on already existing data management solutions such as, for example, relational, XML or NoSQL database management systems, or Big Data technologies for data processing such as HBase or Pig. Figure 5 illustrates the considered "native/non-native" categorization.

4.1. Non-Native triplestores
As examples of non-native triplestores we have Jena SDB (https://jena.apache.org), triplestores that are based on existing classical relational database systems, and triplestores that are based on NoSQL database systems. The category of relational triplestores is treated in Section 6.1 and the category of non-relational triplestores is considered in Section 6.2. The Jena framework is implemented in Java and has been continuously updated since its launch in the year 2000. Jena uses a data structure called a model to represent an RDF graph, with associated methods to manipulate its nodes, which can be resources, blank nodes or literals. Jena creates triples as instances of the Statement class. Jena also comes with a reasoning module for inferencing based on some RDFS and OWL constructs, as well as on user-defined rules. Furthermore, a Jena server called Fuseki is provided for SPARQL querying over HTTP. Jena SDB uses the Jena APIs and JDBC for handling RDF data in a relational database system. It will be further detailed in the single-vertical-table relational triplestores category.

4.2. Native triplestores
As already mentioned, in contrast to non-native stores, which are set up to run on top of other existing database processing solutions, native stores are built especially for the RDF model to provide persistent storage with their own database implementation solutions.
Examples of such native stores are RDF-3X [4], AllegroGraph (https://allegrograph.com), Stardog (http://stardog.com), Jena TDB [5], Mulgara (http://www.mulgara.org), RDFox (http://www.cs.ox.ac.uk/isg/tools/RDFox) and CliqueSquare [6], [7]. The AllegroGraph store uses RDF/XML and N-Triples to load the triples. The implemented query language is SPARQL; in addition, external programming APIs can be used to find datasets matching specific triples. CliqueSquare uses the distributed file system of Hadoop for storing data and its MapReduce implementation for the processing of RDF data.

5. Memory-Based versus Disk-Based Triplestores
Memory-based triplestores, also called in-memory databases, rely on main memory for data storage. As memory access is faster than disk access, these triplestores allow quick access to data and faster query execution. Memory-based triplestores therefore show the best performance, since entire datasets reside in memory. Figure 6 shows the two considered categories, which are presented next.

Figure 6: "Memory / Disc" Categorization

5.1. Memory-based triplestores
As the name indicates, main-memory-based triplestores fully load RDF data into main memory to process it. Jena TDB, TrinityRDF [8], AdHash [9], ClioPatria [10] and ScalaRDF [11] are examples of memory-based triplestores. TrinityRDF allows the storage of trillions of triples. It represents entities as graph nodes, while relations are represented as graph edges. Trinity supports parallel computing and handles massive numbers of in-memory objects as well as complex data with large schemas; however, it does not guarantee serialization for concurrent threads. AdHash applies lightweight hash partitioning to distribute the triples, hashing on subjects in order to limit the data communication costs of join queries. AdHash elaborates on this by monitoring the data access patterns and gradually redistributing and replicating the accessed data.
By increasing the number of join operations executed in parallel, AdHash improves query execution times.

5.2. Disk-based triplestores
The triplestores in this category interact with RDF data through programs that load portions of data from disk whenever they are needed. In this category we have, of course, those triplestores that use the engines of relational database systems for processing RDF data, such as Virtuoso [12] and 4store (https://github.com/4store/4store). We also have Big Data RDF processing solutions that rely on the Hadoop or Spark frameworks for managing RDF data, which will be presented in Section 8.

6. Relational versus Non-Relational Categorization
During the first years of the semantic web, the focus was mainly on the use of relational database (RDB) systems for the storage and processing of RDF data, on one hand because of their dominance and on the other hand with the aim of benefiting from the technologies developed for them over the years with respect to efficient data processing and user APIs. However, such use of relational systems still faces many challenges, such as the need for efficient solutions to reduce the time overhead of translating SPARQL queries into SQL ones. There are also still some difficulties faced by the semantic web world in using object-oriented application frameworks and programming languages. Furthermore, the dynamicity of RDF data generally poses a challenging problem to relational database designers, since relational schemas generally rely on static schemas to model the tables of their databases.

6.1. Relational Triplestores
Relational RDF stores use relational database (RDB) systems to store and query RDF data. Figure 7 presents the categories of such stores, which will be detailed next.

Figure 7: "Relational Triplestores" Category

Relational triplestores benefit from the storage and indexing structures (e.g., hashing, B/B+ trees) offered by RDBMSs and from query optimization techniques based on relational algebra operators, as well as from value typing.
Relational triplestores also allow easy integration of relational databases into RDF models, or of other data sources, using existing data mapping and conversion techniques for transforming and storing such data sources in relational databases. A further positive point of relational triplestores is the possibility of using existing data analytics tools (e.g., machine learning, business intelligence) developed for RDBMSs, although extensions in this sense still have to be considered in the context of the nature of RDF data. Relational triplestores however suffer from high processing costs, due both to loading RDF data into the RDBMS and to the need to translate SPARQL queries into SQL for data processing. Another drawback is that they are in their majority centralized solutions, which makes them inadequate for massive RDF data management. A further negative point of relational triplestores is the lack of user access to functionalities that are already offered to SQL users, such as the creation of indexes or programming interfaces.

6.1.1. Non-Object Relational stores
Since the beginning of the semantic web, various solutions for storing RDF in classical non-object relational database (RDB) systems have been proposed. They mainly differ in how the RDF triples are distributed over the relational schemas. In the following we present the main sub-categories of RDB triplestores and the relational schemas they use to manage RDF data.

6.1.1.1. Single vertical-table RDB triplestores
This category contains those relational stores that store triples in a single table with a column for subjects, a column for predicates, a column for objects and possibly a column for the graphs to which the triples belong. In this category we have the triplestores Jena SDB, 3Store [13], 4Store [14], Sesame [15] and Hexastore [16].
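The single-vertical-table layout can be sketched with any relational engine; the following hypothetical example uses Python's built-in sqlite3 module (the table, column and index names are our own, not those of Jena SDB, 3Store or 4Store).

```python
import sqlite3

# Single vertical table: one row per triple, with a graph column as in
# quad stores such as 4Store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (graph TEXT, s TEXT, p TEXT, o TEXT)")
con.executemany(
    "INSERT INTO triples VALUES (?, ?, ?, ?)",
    [
        ("ex:g1", "ex:Jabir", "ex:teach", "ex:java"),
        ("ex:g1", "ex:java", "rdf:type", "ex:course"),
    ],
)
# An index on subjects, mimicking subject-based lookups.
con.execute("CREATE INDEX idx_s ON triples (s)")

# SPARQL 'SELECT ?x WHERE { ?x ex:teach ex:java }' translated to SQL:
rows = con.execute(
    "SELECT s FROM triples WHERE p = ? AND o = ?", ("ex:teach", "ex:java")
).fetchall()
print(rows)  # [('ex:Jabir',)]
```

This illustrates both the appeal of the layout (one uniform schema, standard indexes) and its cost: every SPARQL triple pattern becomes a self-join on this single table.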
Jena SDB is an RDB triplestore that can be used with a large number of RDBs, which lets it benefit from the indexing capabilities they provide. Applications may use a JDBC connector to store RDF triples in Jena SDB. The use of Jena SDB is only recommended when it is necessary to layer on an existing SQL deployment. If explicit transaction support is required, Jena SDB reuses the transaction model of the underlying relational database. 3Store has a query engine developed in the C language and is implemented on top of MySQL. Hash tables are used for resources, literals and graphs, respectively, to encode such objects. Triples are stored in a single table that stores, for each triple, its associated hash code, which is used as a reference key to its entry in the associated table. RDF data can be accessed via RDQL, based on an Apache server interface and a query engine that translates RDQL queries into SQL queries. 4Store runs on a cluster. Its design is based on 3Store. Data in 4Store is stored as quads (G:graph, S, P, O), where G is the graph to which the triple (S,P,O) belongs. Triples in 4Store are partitioned by hashing on the identifiers of subjects. This partitioning strategy can however lead to some nodes being more heavily loaded with data than others, and may therefore lead to high query processing time costs.

6.1.1.2. Single-horizontal-table RDB stores
The systems of this category use a single table, also called a predicate table, with a column for subjects and a column for each possible predicate. Each resource is then stored, with all values of its associated predicates, in one row of the table, with NULL values for the other predicates. This approach can therefore lead to rows with many NULL values, resulting in large processing times.

6.1.1.3. Predicate-oriented RDB triplestores
These systems associate a relation with each predicate, holding its (subject, object) pairs [17].
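This predicate-oriented (vertically partitioned) layout can be sketched as one two-column table per predicate; the dictionary below is an illustrative stand-in for the per-predicate relations (the helper name is ours, not from [17]).

```python
from collections import defaultdict

def vertical_partition(triples):
    """One (subject, object) table per predicate, as in predicate-oriented stores."""
    tables = defaultdict(list)
    for s, p, o in triples:
        tables[p].append((s, o))
    return dict(tables)

tables = vertical_partition([
    ("ex:Jabir", "ex:teach", "ex:java"),
    ("ex:Amal", "ex:teach", "ex:python"),
    ("ex:java", "rdf:type", "ex:course"),
])
# A query on one predicate touches only that predicate's table...
print(tables["ex:teach"])  # [('ex:Jabir', 'ex:java'), ('ex:Amal', 'ex:python')]
# ...but a query spanning several predicates needs joins across tables.
```

The sketch makes the trade-off visible: there are no NULLs, yet each additional predicate in a query adds another table to join.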
C-Store [18] and SW-Store [19] are examples of such stores. An advantage of this storage approach is that it is easy to implement and resolves the NULL-values problem of the single horizontal table. However, it has the disadvantage of producing a huge number of tables, which involves a large number of joins in query execution.

6.1.1.4. Type-oriented RDB triplestores
These systems associate a relation with each RDF resource type (i.e., with each class of objects), with one column for the subjects of this common class and a column for each predicate associated with such subjects. Triples whose subjects do not belong to any class are stored separately in a table of three columns, associated respectively with subjects, predicates and objects. FlexTable [20] and RDFBroker [21] are examples of such stores. RDFBroker is based on the signatures of the predicates occurring in the RDF data. The set of predicates that occur with a resource in the triples is called its signature. Together, these signatures build the signature of the graph. A signature graph is then constructed with the signatures as nodes, where an edge from one signature to a second one means that the first signature is a subset of the second one. RDFBroker then creates, for each signature in the graph, a table with a first column for the subjects appearing with the signature's predicates in the RDF data triples and one column for each of these predicates. Triples are then put in the suited table according to the signature of their subjects. Such a strategy remedies the NULL-values problem posed by the single-horizontal-table approach.

6.1.1.5. Hybrid triplestores
Hybrid triplestores are stores that use a combination of the previous approaches. Within this category, we principally have stores that cluster the predicates that appear together with respect to a clustering criterion.
For each cluster of predicates, a table is created with columns for the cluster's predicates and a column for the subjects, and triples with these predicates are put together in this so-called property table. To store the triples with the remaining, non-clustered predicates, a single vertical table is used. AdaptRDF [22] is an example of a hybrid triplestore. It uses a single vertical table as well as property tables. First, all triples are put in the vertical table; then, with respect to the query load, property tables are created to partition the vertical table, and they are further dynamically adjusted with respect to predicates based on a mathematical process that considers the query workloads over time.

6.1.2. Object Relational Stores
Object relational databases (ORDBs) offer the possibility of encapsulating data within objects. They also provide methods to serialize objects, generally as key-value associations or compound values. ORDBs are especially needed in fields with complex data objects and object interactions, where data is represented as a collection of objects. Conversion and storage techniques from RDF into ORDBs have mainly been inspired by previous similar works on the processing of XML documents in ORDBs. One solution in this sense is the one proposed, with its prototype, in [23]. In this solution, for each document a model instance of a class Model is created to manage the elements of the document. A class Resource is defined to instantiate a resource or a predicate, and a class Literal is used to instantiate literals. All three classes are subclasses of a class Node which gathers common attributes. A class Statement is also provided for instantiating triples with Node components. Using these classes, the various elements of the RDF document are scanned and stored as objects using the methods of the class Model.
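The class design just described can be illustrated with a small, hypothetical sketch (method names and details are our own; the actual prototype of [23] differs and serializes objects into the ORDB).

```python
class Node:
    """Common base for RDF graph elements, gathering shared attributes."""
    def __init__(self, label):
        self.label = label

class Resource(Node):
    """A resource or a predicate."""

class Literal(Node):
    """A literal value."""

class Statement:
    """A triple whose components are Node instances."""
    def __init__(self, subject, predicate, obj):
        self.subject, self.predicate, self.obj = subject, predicate, obj

class Model:
    """Manages the elements of one RDF document."""
    def __init__(self):
        self.statements = []
    def add(self, subject, predicate, obj):
        self.statements.append(Statement(subject, predicate, obj))

m = Model()
m.add(Resource("ex:Jabir"), Resource("ex:teach"), Resource("ex:java"))
print(len(m.statements))  # 1
```

The encapsulation shown here is what an ORDB would persist as objects, rather than as flat relational rows.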
Also worth mentioning are OO-Store [24], which is proposed as a prototype implementation for the processing of RDF data based on ORDBs, and ActiveRDF [25], which also comes with programming elements for the management of RDF data. All object-relational RDF solutions, with their OO programming constructs, have the advantage of being open to further extensions for interacting with the widely developed object-oriented solutions for data engineering and data access (e.g., UML, Spring, Hibernate) as well as with other programming languages. ORDB triplestores also profit from the advantages offered by RDBMSs. Ways are however still needed to extend such triplestores with object-oriented graph APIs for an object-oriented programming perspective in the context of RDF data.

6.2. Non-Relational triplestores
Some of the key factors that have motivated the search for RDF processing systems other than relational ones are, of course, the limitations of RDB systems with respect to data variability, which requires dynamic and flexible schemas rather than static RDB schemas. Figure 8 illustrates the categorization of such systems, whose sub-categories are presented in the following subsections.

6.2.1. Binary Triplestores
The class of binary stores consists of triplestores that use bits to encode RDF triples. BitMat [26] and TripleBit [27] are two examples of such stores. In BitMat, each line of the matrix is associated with one subject and each column with one predicate. The entry in the i-th line and j-th column of the matrix is a sequence of bits from {0,1}, where a bit 1 at position k represents the presence of the triple (i-th subject, j-th predicate, k-th object). BitMat exploits this 0-1 representation of RDF triples to compress the RDF data. Querying of RDF data is done in two steps in BitMat.
Candidate matches are derived from the bit matrix in a first step and the exact matches are returned in a second step. Despite the advantages offered by the bit representation and the compression it makes possible, this technique still has to tackle the problems posed by the insertion or deletion of RDF triples.

6.2.2. NoSQL-Document triplestores
Document stores, like MongoDB and CouchDB, use documents to persist data. A document is organized as a collection of fields, where each field is associated with a set of values. Each of the fields can be used as an index for data retrieval. The RDF triplestores D-SPARQ [28] and RDFMongo [29] are examples of document-oriented stores that use MongoDB. Another NoSQL document solution for processing RDF data, which we call CouchbaseRDF, was presented in [30] to store RDF data in Couchbase (https://www.couchbase.com), a JSON-based document store.

6.2.3. NoSQL-Key-Value triplestores
The category of NoSQL-key-value triplestores consists of those RDF stores that use a NoSQL key-value database system for storing and querying RDF triples. NoSQL key-value database systems store data as collections of key/value pairs and offer get(key) and put(key, value) access methods to read and write data. Redis (REmote DIctionary Server - https://github.com/redis/redis) and DynamoDB (https://aws.amazon.com/fr/dynamodb) are examples of key-value NoSQL database management systems. Redis is an in-memory data management system. DynamoDB offers various characteristics such as replication and backup of data and the possibility of integration in web applications. An example of a NoSQL-key-value RDF store using Redis is ScalaRDF, which is also a distributed and memory-based store. As an example of a triplestore using DynamoDB we have AMADA [31]. 6.2.4.
NoSQL-Graph triplestores Graph-oriented triplestores simply use the graph representation of RDF data and store these data as directed graphs, where the nodes are either resources or literals and an edge from a node n1 to a node n2 labeled with a predicate p means that (n1,p,n2) is a triple. In the family of NoSQL-graph triplestores we have gStore [32], Dydra (http://dydra.com), AllegroGraph, BlazeGraph (http://blazegraph.com) and S2X (SPARQL on Spark with GraphX) [33]. The triplestore gStore is a centralized graph-oriented solution that uses bit-encoding to encode RDF triples, and it encodes SPARQL queries in the same way. The codes of SPARQL queries are then simply matched against the encoding list of the RDF data. gStore has also been extended to a distributed solution called gStoreD. AllegroGraph is a high-performance triplestore that is continuously updated and extended. For data retrieval, it organizes data in repositories, associates an identifier with each triple, and stores each triple as a quad composed of the values for subject, predicate, object and the graph to which the triple belongs. Furthermore, it uses all combinations of these four components, to which the identifier is added, as default indexes. A query in AllegroGraph is first analyzed to determine the indexes that may be involved in the query. The actual indexes used by the query are dynamically identified in a second processing step. AllegroGraph also supports reasoning and transaction management. 6.2.5. **NoSQL-Column triplestores** The list of column triplestores comprises, among others, "Jena-HBase" [34], H2RDF+ [35], CumulusRDF [36] and Rya [37]. The triplestore "Jena-HBase" is built on top of the HBase column store and is discussed in the Hadoop-non-native Big Data triplestores category. H2RDF+ and CumulusRDF use the Cassandra column database. The triplestore Rya uses Accumulo. **7.
Centralized versus Distributed Categorization** Various RDF stores have been designed to ensure efficient and scalable RDF query processing in a centralized way. Centralized systems manage storage and querying on a single node. Hence, their main advantage is that they handle all operations locally. However, they face the inconvenience of limited resources due to the use of a single machine. Distributed triplestores use multiple machines for the storage and querying of RDF data. They therefore have the capability of handling large amounts of data. Both categories are presented, with their characteristics, in the following subsections. **7.1. Centralized triplestores** Centralized triplestores use a single machine to handle RDF data. The centralization concerns data storage as well as SPARQL processing. Figure 9 presents the sub-categories of the centralized triplestores category. As their name suggests, the main drawback of centralized triplestores is the lack of scalability and fault tolerance. **7.1.1. Desktop triplestores** By desktop triplestores we mean those RDF management systems that run on a single desktop machine, such as RDF-3X and Hexastore. Hexastore combines the relational vertical representation approach with indexing capabilities to ensure fast querying of RDF triples. Indeed, it uses each possible combination of the components "subject", "predicate" and "object" for indexing. **7.1.2. Mobile triplestores** This category of RDF stores consists of triplestores built especially for managing RDF data on mobile devices, such as RDF-on-the-Go [38]. The flexibility and simplicity of the RDF data model make it a good candidate for data interaction within and between such mobile devices. RDF-on-the-Go is a full-fledged RDF storage system that allows RDF storage and SPARQL query processing on mobile devices. RDF-on-the-Go relies on Jena and the Semantic Web Toolkit ARQ. It stores triples using Berkeley DB.
Its indexing strategy is based on the use of R-Trees. **7.2. Distributed RDF Triplestores** Distributed triplestores are those systems that use more than one node to manage RDF data. The distribution concerns the task of storage alone, the task of query processing alone, or both tasks. Data distribution requires choosing efficient RDF data partitioning strategies that are also in accordance with the data retrieval modes chosen for querying, in order to achieve rapid RDF data manipulation. The issues involved mainly relate to data partitioning, data exchange between nodes, partitioning of the processing load, and failure handling; the speed of RDF data processing is mainly influenced by these issues. The strategies to address them have to be well chosen, both to control the communication between nodes, which can lead to high costs, and to minimize data processing times within single nodes. Figure 10 illustrates the distributed categories of triplestores. 7.2.1. Native-distributed triplestores Native-distributed systems, considered here with respect to distribution only, are those triplestores that come with their own approaches for both storage distribution and query processing distribution. 7.2.1.1. Master-Slave native-distributed RDF stores This category is composed of those triplestores that are built independently of any existing data management solution and follow the master-slave distribution principle, where RDF data management is controlled by a master node that distributes management tasks to slave nodes. Examples of such triplestores are Virtuoso Cluster Edition, OWLIM [39], YARS2 [40] and TriAD (Triple Asynchronous Distributed) [41]. OWLIM, with its variants, is developed in the Java programming language and is a native RDF store. Its variant SwiftOWLIM is a memory-based centralized triplestore.
Its cluster version BigOWLIM is distributed and, contrary to other distributed triplestores, handles deletion and insertion of RDF data efficiently with the help of its indexing and partitioning strategy. It is currently developed under the new name GraphDB (http://graphdb.ontotext.com), which also belongs to the category of cloud triplestores. YARS2 is a native distributed RDF store. It proposes distributed indexing methods and three forms of indexes: a keyword index, six quad indexes and join indexes. TriAD also uses a classical master-slave architecture with direct communication through the asynchronous exchange of messages. TriAD uses METIS graph partitioning with respect to subjects and objects, with associated combinations of indexes. Queries in TriAD are optimized using a summary graph that takes the result of the partitioning into consideration in order to execute queries only on the concerned parts of the RDF graph. 7.2.1.2. P2P triplestores P2P (peer-to-peer) denotes a distributed model for a network of computers in which the computers, also called nodes, play an equal, autonomous role with regard to responsibility in the network and share their resources with the other nodes. In a P2P system, there is no single master node managing the distribution traffic between the nodes: computing services, data management and networking are offered in a decentralized way and are therefore not controlled centrally as in master-slave networks. Beyond this decentralization, fault tolerance and scalability are the main advantages of P2P systems. Examples of P2P-based RDF data management systems are RDFPeers [42], Atlas [43], Edutella [44], RDFCube [45], GridVine [46] and UniStore [47]. The main problem faced by P2P triplestores is how to get a balanced distribution of RDF data between nodes, for efficient retrieval and querying, and to avoid some nodes becoming much more heavily loaded with data than others.
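The hash-based placement commonly used to attack this balancing problem can be sketched as follows. This is a minimal illustration with invented names and data, not any particular system's code; real P2P stores such as RDFPeers hash triples on several components and add recovery and maintenance machinery.

```python
import hashlib

# Minimal sketch of hash-based triple placement in a P2P triplestore: each
# triple is routed to a peer by hashing its subject, so any node can compute
# where a triple lives without asking a central master.
NUM_PEERS = 4

def peer_for(term, num_peers=NUM_PEERS):
    digest = hashlib.sha1(term.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_peers

def place(triples):
    peers = {i: [] for i in range(NUM_PEERS)}
    for s, p, o in triples:
        # Routing on the subject keeps all triples of one subject on a single
        # peer, making (s, ?, ?) lookups a one-peer operation.
        peers[peer_for(s)].append((s, p, o))
    return peers

triples = [(f"res{i % 10}", f"p{i}", "o") for i in range(100)]
peers = place(triples)
```

A skewed subject distribution can still overload one peer, which is exactly the imbalance that threshold-triggered local split procedures are meant to correct.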
Hashing is a common indexing solution used for distributing and tracking RDF data, though triplestores differ in their adopted hashing strategies. The hashing guides the distribution, but depending on the method used it may lead to load imbalances between nodes. In this case, the strategy is generally complemented with a local split procedure at each node: once it exceeds a threshold of stored data, a node launches its split procedure to achieve a uniform data distribution and therefore balanced querying of the RDF data. Another crucial task for a P2P store is the maintenance of the hashing information during data deletion, update and insertion. Apart from the burden caused by these hashing tasks, P2P triplestores generally offer, beyond scalability, robustness with respect to fault tolerance and the advantage of not being centrally controlled. 7.2.2. Nonnative-distributed triplestores The nonnative-distributed triplestores, as the name suggests, rely on existing distribution frameworks for the processing of RDF data. On one hand, we have those triplestores that use cloud solutions, presented next; on the other hand, we have triplestores relying on Big Data frameworks, which are presented in section 8. 7.2.2.1. Cloud triplestores In recent years, cloud computing has attracted growing interest from users due to its flexibility, costs and availability of computing resources. Numerous cloud computing providers have emerged, offering computing software and making powerful machines available to users. Furthermore, cloud computing has the advantage of hiding from users the complexity of distribution and the handling of problems such as fault tolerance. Within the framework of RDF data management, numerous triplestores relying on cloud solutions have also been developed.
Among these we have GraphDB (http://graphdb.ontotext.com), AMADA, H2RDF [48], Rya, Stratumstore [49] and DiploCloud [50]. GraphDB is an RDF database system that runs on the AWS cloud and provides easy on-demand access to semantic metadata. DiploCloud represents an RDF graph using three main structures, namely RDF molecule clusters, a template list and a key index. 7.2.2.2. Nonnative-Distributed graph triplestores This category consists of those RDF systems that use graph-oriented solutions for RDF data management in a distributed scenario. Acacia-RDF [51] is an example of such a triplestore. It relies on the graph database solution Acacia, is programmed in the X10 language, and comes with implementations of various algorithms for handling graphs. It can also be run on a single node as a centralized triplestore. 8. Big Data Triplestores In recent years, various solutions have emerged for the processing of huge amounts of data on clusters made up of commodity computers. Such solutions also offer programming tools, based on well-defined frameworks, for accessing and processing the large data volumes scattered in their distributed file systems. The categorization of Big Data triplestores given here is made with respect to the Big Data processing solution used by each triplestore. More precisely, we distinguish between those triplestores that are based on Hadoop, Spark or Flink. The adoption of each of these systems by a triplestore is clarified in the associated category subsection, taking into account the characteristics of the system considered. The sub-categories within the Big Data category are presented in the following subsections and are illustrated in Figure 11. 8.1.
Hadoop triplestores The Hadoop triplestores are built on the Hadoop HDFS (Hadoop Distributed File System) and the Hadoop MapReduce programming framework for the storage and processing of RDF data. In the following subsections we give the associated subcategories and highlight the main principles on which their storage structures and querying are based. The subcategories are identified by taking into account whether the triplestores rely on a direct use of HDFS and MapReduce completely, or only partly through other intermediary solutions. 8.1.1. Hadoop-native triplestores Hadoop-native triplestores do not rely on any already existing Hadoop-based solution for storing or querying data: they are built from scratch to use the HDFS file system to store RDF graphs and MapReduce to execute SPARQL queries. In the category of Hadoop-native triplestores we have SHARD [52], HadoopRDF [53] and CliqueSquare. HadoopRDF stores RDF triples in HDFS based on a predicate-oriented partitioning and decomposes queries into MapReduce jobs accordingly. It keeps as many joins as possible in each job to reduce the number of jobs. This strategy can lead to high time costs, especially when the values of predicates are unknown and multiple files have to be loaded to process a query. SHARD is also a Hadoop triplestore; it distinguishes itself by subject-oriented RDF data storage and an iterative query processing which is also subject-oriented. For each subject it stores all its triples, with their predicates and objects, in one line. To process a query, it creates a pattern-matching job for each triple pattern in the query and executes a join with the result computed up to that job. This strategy leads to very long running times. 8.1.2.
Hadoop-nonnative triplestores By Hadoop-nonnative triplestores, we mean those triplestores that directly use other existing HDFS/MapReduce general data management solutions for the handling of RDF data. Examples of RDF stores in this category are PigSPARQL [54], RAPID+ [55, 56], “Jena-HBase” and “Hive+HBase” [57]. The triplestore “Hive+HBase”, for example, uses the functionalities of HBase, which relies on HDFS for managing data, and of Hive, which also offers a data warehousing module. The reliance of Hadoop-nonnative triplestores on existing Hadoop data storage and processing solutions is an advantage, since such solutions are intended for use in a general context and therefore offer the triplestores possible ways for further development, with components related, for example, to the integration of other data sources, to data analytics, and to transaction management. However, the major drawback for both Hadoop-native and Hadoop-nonnative triplestores is the high communication cost caused by unavoidable disk input and output operations during the execution of MapReduce job phases when dealing with massive RDF data. In the case of Hadoop-nonnative triplestores, the translation of SPARQL queries to the query languages of the engines on which these triplestores rely also adds extra costs. 8.2. Spark based triplestores Spark’s solution is based on storing processed data and intermediate results in the main memories of the computing nodes and keeping a history of the computations for recovering lost data in case of failures. This lets Spark enhance speed, since switching to disk is not as frequent as in Hadoop MapReduce executions. At the base of computation, Spark uses so-called Resilient Distributed Datasets (RDDs), which are collections of data partitioned into chunks distributed over the computing nodes and kept as much as possible in main memory.
Such RDDs are represented as Java objects. Spark also provides a module for SQL. SQL querying is done on RDDs, which enables fast querying through the parallel computation offered by Spark across the nodes while benefiting from the use of memory to store RDDs. SQL querying on external data, like Hive data, is also done by loading such data into Spark RDDs. Spark is used by the triplestores SPARQLGX [58], S2RDF [59], SPARQL-Spark [60], PRoST [61], TripleRush [62] and Presto-RDF [63]. The triplestore S2RDF (“SPARQL on Spark for RDF”) tries to minimize query processing times by reducing the amount of data to keep in memory. For this, it uses a schema for RDF data that extends the predicate-oriented partitioning schema already presented in subsection 6.1.1.3 with additional pre-computed tables. The main idea behind this schema is to reduce the size of the data to be loaded into memory when dealing with joins within the queries to be processed. This has the advantage of avoiding input-output hard disk operations, since Spark keeps data in memory during program execution. For two distinct predicate tables T1 and T2, S2RDF pre-computes and stores into HDFS three semi-join tables containing those (s,o) pairs of T1 for which, respectively, s is a subject in T2, o is a subject in T2, and s is an object in T2. A limitation of S2RDF is the need for additional functionality for automatically launching an efficient update of the semi-join tables each time a deletion of existing triples or an insertion of new ones happens. With regard to the aforementioned characteristics, Spark based triplestores have the advantage over Hadoop ones of largely reducing RDF data processing costs, since input-output operations are greatly reduced: RDF data and intermediary data are mainly kept partitioned in the memories of the processing nodes during the processing stages. 8.3.
Flink based triplestores Flink is natively developed for data streaming and offers massive real-time streaming functionalities, as well as APIs for data mining operations on streams. Flink can be considered a Big Data engine for event streaming, while Spark can be considered a Big Data engine for micro-batch streaming. [Figure 12 depicts the Flink architecture: a layer of libraries and APIs (Python API, Table API, FlinkML, Gelly, ...) on top of the Flink kernel, with distribution, deployment modes (centralized, cluster, cloud) and storage modes (centralized, cluster, cloud) below.] Figure 12: Flink architecture Flink is developed in Java and Scala and provides an API for the processing of graphs called Gelly. The components of Flink are presented in Figure 12. FLINKer [64] is an example of a triplestore that is based on Flink and therefore provides RDF data streaming. In FLINKer, Gelly graphs are built from RDF triples and then loaded into the Flink system to be handled. This graph representation is a strong positive point of FLINKer, since it allows easy graph partitioning and distribution of data processing among nodes.
For RDF data querying, FLINKer applies Flink data processing operators to Gelly graphs and generates query optimization plans based on Flink's parallelization contract programming approach (PACT) for parallel execution. Despite these advantages, FLINKer still needs additional functionality for more user involvement, for instance extensions with APIs for data representation based on Gelly graphs and for data analytics purposes. FLINKer also lacks elements for transaction management. 9. Stores for Constrained Devices Micro-computing has made it possible to integrate programmable modules with memories for data storage in devices with reduced capacity. Various devices with such modules have been developed in recent years for various applications (edge devices, sensors, etc.). The integration of RDF processing systems has also become possible for such small peripherals despite their limited memory capacity. In the category of triplestores for constrained devices we have the µRDF store [65], RDF4Led [66] and Wiselib [67]. The µRDF store was developed with the aim of making the exchange and processing of RDF data possible in the world of the “internet of things” (IoT). It was tested on micro-controllers with memories ranging from 8 to 64 kB and with an internet connection; the tests include the storage of RDF data as well as SPARQL querying using basic SPARQL constructs. RDF4Led, in contrast, addresses RDF data exchange for lightweight edge devices, which are common in IoT as well as in cloud computing. The RDF4Led built-in system comprises a physical storage with a triple indexing strategy, an intermediary buffering unit and a query engine. The efficiency of RDF4Led has been demonstrated for devices with a few hundred megabytes of main memory and a storage capacity of 16 GB. 10.
Comparison with Related Works We notice that most existing works concentrate on a specific type of triplestore, reviewing or categorizing triplestores with a limited set of characteristics or comparing query processing times. We mention in particular the works in [68] for relational stores, in [69] for NoSQL stores, in [70] for P2P stores and in [71] for Big Data stores. Contrary to these works, our approach comes with a consistent and detailed categorization focused on storage and query processing characteristics. Figure 13 presents the main categories. As already mentioned, some existing triplestores can be part of several categories. For other issues related to detailed comparison criteria for RDF stores we refer to our work [72]. 11. Conclusion The wide acceptance of RDF in many fields has led to the development of various triplestores for the management of RDF data, each exhibiting its own characteristics. The variety of triplestores is a result of the variety of application use cases and of the various characteristics of the data to be handled, mainly related to data variety, data volume, and the data management tools and technologies. In this work we gave an extensive categorization of existing triplestores, identifying for each established category the key features that justify treating it separately and presenting its underlying RDF data processing capabilities. We mainly focused on the data processing techniques used by the systems of each category, as well as their modes of deployment for the processing of RDF data and queries. The different categories of triplestores are established according to the target machines, whether constrained devices, desktops or clusters, as well as according to the technologies on which they are based: relational, non-relational, Cloud, P2P or Big Data.
The categorization is also illustrated by reviewing, within each category, its representative RDF triplestores, highlighting advantages and disadvantages of the technology on which they are based in the context of RDF data characteristics, and giving some suggestions for possible extensions. With the given categorization, users will be able to identify the triplestores best suited to their use cases. Triplestore designers will also be able to focus on the relevant features to consider for the challenging task of designing and developing RDF stores, or to identify possible extensions of existing stores depending on the targeted data management types and the tools at hand. Conflict of Interest The authors declare no conflict of interest. 12. References
Efficient Tiled Sparse Matrix Multiplication through Matrix Signatures Süreyya Emre, Aravind Sukumaran-Rajam, Fabrice Rastello, Ponnuswamy Sadayappan HAL Id: hal-03117491 https://inria.hal.science/hal-03117491 Submitted on 21 Jan 2021 Abstract—Tiling is a key technique to reduce data movement in matrix computations. While tiling is well understood and widely used for dense matrix/tensor computations, effective tiling of sparse matrix computations remains a challenging problem. This paper proposes a novel method to efficiently summarize the impact of the sparsity structure of a matrix on achievable data reuse as a one-dimensional signature, which is then used to build an analytical cost model for tile size optimization for sparse matrix computations. The proposed model-driven approach to sparse tiling is evaluated on two key sparse matrix kernels: Sparse Matrix - Dense Matrix Multiplication (SpMM) and Sampled Dense-Dense Matrix Multiplication (SDDMM). Experimental results demonstrate that model-based tiled SpMM and SDDMM achieve high performance relative to the current state-of-the-art. Index Terms—sparse matrix signature, sparse tiling, SpMM, SpMDM, Sparse Dense Matrix Multiplication, Multi-core I.
INTRODUCTION Sparse Matrix Multi-vector multiplication (SpMM, also sometimes called Sparse-Matrix Dense-Matrix Multiplication, or SpMDM) and Sampled Dense-Dense Matrix Multiplication (SDDMM) are important kernels used in many domains like Fluid Dynamics, Data Analytics, Economic Modelling, and Machine Learning [15], [16]. In areas like Machine Learning and Artificial Neural Networks, these kernels are executed repeatedly, so optimized implementations are important for many software frameworks like Tensorflow [2] and PyTorch [29]. Several recent efforts have sought to exploit sparsity in deep learning, using an SpMM formulation [11], [17], [22]. Examples of the use of SpMM from numerical simulation include the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method for finding eigenvalues of a matrix [4], [21], and iterative solvers with multiple right-hand sides, like the Krylov subspace iterative solvers that use SpMV at their core. Sampled Dense-Dense Matrix Multiplication (SDDMM) is a kernel that can be used as a core operation in an efficient formulation of factorization algorithms in machine learning, such as Alternating Least Squares (ALS) [19], Sparse Factor Analysis (SFA) [20] and topic modeling algorithms like Latent Dirichlet Allocation (LDA) [5] and Gamma Poisson (GaP) [37], as described in detail by Zhao and Canny [8]. A. The challenge of tiling sparse matrix multiplication The cost of performing arithmetic/logic operations on current processors is significantly lower than the cost of moving data from memory to the ALU (arithmetic/logic units), whether measured in terms of latency or throughput. The reduction of data movement between nodes of a parallel computing system, as well as within the memory hierarchy of each node, is critical for achieving high performance. Data locality optimization is therefore of fundamental importance.
Tiling is a key technique for optimizing data movement in dense matrix computations like matrix-matrix multiplication, LU decomposition, and Cholesky factorization. Tiling can enable significant data reuse in levels of cache, so that the limited main-memory bandwidth does not constrain performance. While techniques for tiling of regular computations using dense arrays are well understood and have been incorporated into compilers [6], [26], [27], [39], data locality optimization for parallel irregular sparse computations remains a significant challenge. A number of research efforts have sought to develop compile-time analysis and transformation techniques for sparse computations [24], [30], [31], [36], [38] but the current state of knowledge and tools are quite far from being able to make a practical impact on optimizing SpMM to exceed performance of available implementations in libraries like Intel’s MKL. A key issue with tiled parallel matrix computations is that of tile size selection, since the choice of tile sizes has a significant impact on performance. If the chosen tile sizes are too small, the cache is under-utilized and sub-optimal data reuse in cache results in high volume of expensive data movement from memory. If the tile sizes are too big, the cache capacity is insufficient to retain the data for reuse within the tile, resulting in excessive cache misses and high volume of data movement from memory. The issue of analytical modeling for effective tile-size selection for dense matrix computations has been the subject of significant research [9], [32], [35], [42]. However, to the best of our knowledge, no prior work has developed a model-driven approach for selection of tile sizes for any sparse matrix computation. While some recent work [12], [13] has focused on tiled sparse matrix computations, the choice of tile sizes was done using empirical heuristics and auto-tuning over a space of choices. B. 
Sparse-matrix signatures for model-guided tiling A significant challenge in modeling the impact of tile size on data movement with sparse matrix computations is that the amount of data movement depends not only on the number of non-zero elements but also on the pattern of non-zeros. We use a simple example to illustrate this. We first measured the volume of data movement and the achieved performance using the SpMM implementation in Intel’s Math Kernel Library (MKL) for forming the product of a $100K \times 100K$ banded sparse matrix (band-size of 97) and a $100K \times 128$ dense matrix. The data volume to/from main memory was measured to be $4 \times 10^8$ bytes and the achieved performance was 20 GFLOPS. The non-zero elements of the banded sparse matrix were then randomly reordered by performing a randomized row/column permutation. The total number of arithmetic operations thus remains unchanged, but the measured data volume from memory shot up to $10^{11}$ bytes and performance dropped to 0.5 GFLOPS. Thus, the 2D distribution of non-zeros of a sparse matrix can have a significant impact on performance. The question we address in this paper is: Can the impact of the 2D pattern of non-zero elements on the performance of sparse matrix computations be captured in some form that can be effectively used to perform model-driven data-locality optimization for such computations? In this paper, we address this problem and develop a novel approach to enable effective model-driven tiled execution of two important sparse-matrix primitives – SpMM and SDDMM. The key to the modeling approach is to express data movement volume in terms of a compact one-dimensional function signature for a sparse matrix that succinctly captures the impact of the non-zero structure of the matrix on data movement for the class of computations. The compact signature is then used to perform model-driven tile-size optimization. We also present a novel algorithm to efficiently generate the sparse-matrix signature.
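The effect described above can be reproduced at a small scale. The sketch below is a simple illustration (not the paper's signature algorithm): in row-tiled SpMM, each row tile must load one dense-matrix row per *distinct* column index it touches, so a banded matrix reuses far more data per tile than the same matrix under a random row permutation. All sizes and names are illustrative.

```python
import random

# Count, over all row tiles, the distinct column indices touched per tile:
# a proxy for the number of dense-matrix rows each tile must load.
def distinct_cols_per_tile(rows, tile):
    total = 0
    for t in range(0, len(rows), tile):
        cols = set()
        for r in rows[t:t + tile]:
            cols.update(r)
        total += len(cols)  # dense rows loaded for this tile
    return total

n, half_band, tile = 1024, 5, 64
# Banded matrix: row i has non-zeros in columns [i - 5, i + 5].
banded = [list(range(max(0, i - half_band), min(n, i + half_band + 1)))
          for i in range(n)]

rng = random.Random(0)
permuted = banded[:]
rng.shuffle(permuted)  # random row permutation: same non-zeros, new 2D pattern

# The permuted matrix forces each tile to touch many more distinct columns.
assert distinct_cols_per_tile(permuted, tile) > 2 * distinct_cols_per_tile(banded, tile)
```

The arithmetic work is identical in both cases; only the 2D placement of the non-zeros changes, which is exactly the information the paper's one-dimensional signature aims to summarize.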
We demonstrate the utility of the new abstractions by developing new implementations of tiled SpMM and SDDMM for multicore/manycore processors. The key contributions are as follows: - We relate data movement requirements for sparse matrix computations to a sparse-matrix locality-signature that can be pre-computed once and reused for optimizing SpMM and SDDMM for execution on different target platforms. - We develop a novel algorithm for efficient computation of the sparse-matrix locality-signature. - We develop efficient model-guided tiled SpMM and SDDMM implementations based on use of the summarized sparse-matrix signatures. II. BACKGROUND AND OVERVIEW OF APPROACH In this section, we first provide some background on the two sparse matrix computations of focus in this work, SpMM and SDDMM, and then present a high-level overview of the key ideas behind the approach we develop for model-driven tiling of these sparse matrix computations.

Algorithm 1: Sequential SpMM
input: CSR S[M][N], dense B[N][K]
output: dense O[M][K]
for i = 0 to M-1 do
  for j = S.rowptr[i] to S.rowptr[i+1]-1 do
    for k = 0 to K-1 do
      O[i][k] += S.values[j] * B[S.colidx[j]][k]

Algorithm 2: Sequential SDDMM
input: CSR S[M][N], dense A[M][K], dense B[N][K]
output: CSR P[M][N]
for i = 0 to M-1 do
  for j = S.rowptr[i] to S.rowptr[i+1]-1 do
    for k = 0 to K-1 do
      P[i][j] += A[i][k] * B[S.colidx[j]][k]
for i = 0 to M-1 do
  for j = S.rowptr[i] to S.rowptr[i+1]-1 do
    P[i][j] *= S.values[j]

The most commonly used sparse matrix representation is the Compressed Sparse Row (CSR) representation [33]. In CSR, three 1D arrays are maintained: rowptr, colidx and values. The $i$-th entry of the rowptr array represents an offset to the start of a compacted set of entries in the values and colidx arrays that hold the numerical values and column-indices, respectively, of the non-zero elements in row $i$.
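Algorithms 1 and 2 can be written as runnable Python directly over the CSR arrays just described. This is a didactic sketch mirroring the pseudocode's field names (rowptr, colidx, values), not an optimized implementation.

```python
# Runnable versions of the sequential SpMM and SDDMM loop nests, over a
# minimal CSR structure with the same field names as the pseudocode.
class CSR:
    def __init__(self, rowptr, colidx, values, shape):
        self.rowptr, self.colidx, self.values = rowptr, colidx, values
        self.shape = shape

def spmm(S, B, K):
    M = S.shape[0]
    O = [[0.0] * K for _ in range(M)]
    for i in range(M):
        for j in range(S.rowptr[i], S.rowptr[i + 1]):
            for k in range(K):
                O[i][k] += S.values[j] * B[S.colidx[j]][k]
    return O

def sddmm(S, A, B, K):
    M = S.shape[0]
    P = [0.0] * len(S.values)  # one entry per non-zero of S, in CSR order
    for i in range(M):
        for j in range(S.rowptr[i], S.rowptr[i + 1]):
            for k in range(K):
                P[j] += A[i][k] * B[S.colidx[j]][k]
            P[j] *= S.values[j]  # Hadamard product with S
    return P

# S = [[1, 0], [0, 2]] in CSR form
S = CSR(rowptr=[0, 1, 2], colidx=[0, 1], values=[1.0, 2.0], shape=(2, 2))
B = [[1.0, 2.0], [3.0, 4.0]]
A = [[1.0, 0.0], [0.0, 1.0]]
print(spmm(S, B, 2))     # [[1.0, 2.0], [6.0, 8.0]]
print(sddmm(S, A, B, 2)) # [1.0, 8.0]
```

Note how both kernels share the same outer two loops over the CSR structure, which is the common locality pattern the paper exploits.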
SpMM: Sparse Matrix-Matrix product (also called Sparse Matrix-Multivector product) multiplies an input $M \times N$ sparse matrix $S$ and an input $N \times K$ dense matrix $I$ to produce a dense $M \times K$ matrix $O$, i.e., $O = S I$. Algorithm 1 shows the corresponding pseudocode. The outer $i$ loop traverses all rows of $S$, and for each row, the $j$ loop accesses the non-zero elements from the CSR representation of $S$ and multiplies each with all elements (inner $k$ loop) from the appropriate row of the dense input (corresponding to the column index of the non-zero element of $S$) to accumulate into the elements of row $i$ of $O$. SDDMM: SDDMM computes the product of two dense matrices ($A$ and $B$) and the result matrix is then subjected to a Hadamard product (pointwise multiplication) with a sparse matrix $S$, i.e., $P = S \odot AB$. Algorithm 2 presents the SDDMM pseudocode. The outermost loop $i$ of the first loop nest iterates over all the rows of $P$, and the $j$ loop identifies the non-zero elements from the CSR representation of $P$. For each non-zero element of $P$, a $K$-way dot product of the corresponding rows of $A$ and $B$ is accumulated into $P$. The second loop nest scales each computed element of $P$ by the corresponding non-zero element of $S$, storing the result in $P$. These two sparse matrix computations have been the subject of several prior optimization efforts [1], [4], [7], [18], [28], [40], [41], but these efforts have not highlighted any relationship between these two computations that could enable the application of common optimization strategies across the two codes. A key insight driving our work in this paper is that both computations can be viewed as instances of a common pattern with respect to data locality considerations. We present the SpMM and SDDMM computations below in an abstracted form that makes this shared pattern explicit.
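Algorithm 2 admits a similarly direct transcription; again a sketch with plain lists, where the output values of P are stored in the same CSR order as the non-zeros of S.

```python
def sddmm_csr(rowptr, colidx, values, A, B, K):
    """SDDMM per Algorithm 2: P = S (Hadamard) A.B^T, with S and P
    in CSR form. A is M x K and B is N x K, so the dot product of
    row i of A and row j of B runs over k."""
    M = len(rowptr) - 1
    P = [0.0] * len(values)
    for i in range(M):
        for e in range(rowptr[i], rowptr[i + 1]):
            j = colidx[e]
            acc = 0.0
            for k in range(K):
                acc += A[i][k] * B[j][k]
            P[e] = acc * values[e]  # scale by the sparse entry of S
    return P

# S has non-zeros (0,0)=2 and (1,1)=3; A = [[1,2],[3,4]], B = identity
P = sddmm_csr([0, 1, 2], [0, 1], [2.0, 3.0],
              [[1.0, 2.0], [3.0, 4.0]], [[1.0, 0.0], [0.0, 1.0]], 2)
print(P)  # [2.0, 12.0]
```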
Algorithm 3: Sequential SpMM Abstracted \[ \begin{array}{ll} \text{input} & : \text{Sparse } S[M][N], \text{ Dense } I[N][K] \\ \text{output} & : \text{Dense } O[M][K] \\ 1 & \text{for } i = 0 \text{ to } M-1 \text{ do} \\ 2 & \quad \text{for } j \mid S[i][j] \neq 0 \text{ do} \\ 3 & \quad \quad \text{for } k = 0 \text{ to } K-1 \text{ do} \\ 4 & \quad \quad \quad O[i][k] \leftarrow O[i][k] + S[i][j] \times I[j][k] \\ \end{array} \] Algorithm 4: Sequential SDDMM Abstracted \[ \begin{array}{ll} \text{input} & : \text{Sparse } S[M][N], \text{ Dense } A[M][K], \text{ Dense } B[N][K] \\ \text{output} & : \text{Sparse } P[M][N] \\ 1 & \text{for } i = 0 \text{ to } M-1 \text{ do} \\ 2 & \quad \text{for } j \mid S[i][j] \neq 0 \text{ do} \\ 3 & \quad \quad \text{for } k = 0 \text{ to } K-1 \text{ do} \\ 4 & \quad \quad \quad P[i][j] \leftarrow P[i][j] + A[i][k] \times B[j][k] \\ 5 & \text{for } i = 0 \text{ to } M-1 \text{ do} \\ 6 & \quad \text{for } j \mid S[i][j] \neq 0 \text{ do} \\ 7 & \quad \quad P[i][j] \leftarrow P[i][j] \times S[i][j] \\ \end{array} \]

III. RELATED WORK

In this section, we summarize prior research on optimizing SpMM and SDDMM. a) Taco: Taco [18] is a C++ library which uses compiler techniques to generate kernels for tensor algebra operations. These operations can be for sparse or dense tensors of any dimensionality. The generated kernels are already optimized and use the OpenMP parallel pragma for parallelization. This library and its online code-generation tool can be used to generate an SpMM kernel, where all the tensors are 2D. b) Intel Math Kernel Library: Intel MKL is one of the most commonly used BLAS and Sparse BLAS libraries for CPUs. This library has highly optimized kernels for many sparse BLAS operations like SpMM, SpMV and SpGEMM. MKL supports various matrix representations such as CSR, CSC and COO. The MKL library also supports AVX512 instructions and has kernels optimized especially for the Xeon Phi architecture, which results in significant performance gains [14].
c) Compressed Sparse Blocks based SpMM: Compressed Sparse Blocks (CSB) is a sparse matrix storage format which partitions and stores the matrix in smaller square blocks. This representation does not require more space than the commonly used CSR or CSC representations. Using the CSB format shows significant improvement for SpMM as well as for transposed SpMM [3]. d) Data Locality Optimization for SpMM: Some recent efforts have addressed data-locality optimization for SpMM and SDDMM [12], [13], [25]. However, a significant difference between the developments we present in this paper and these previous efforts is our use of analytical modeling, based on a compact characterization of the sparse matrix (its signature), for tile-size selection. These previous efforts have used empirical means to select tile sizes, and their main focus has been to reorder the sparse matrix elements into highly clustered regions and use two different kernels to process non-zeros in heavily populated blocks versus sparsely populated blocks. We do not consider any sparse-matrix reordering or the use of different kernel execution strategies based on local non-zero density as done by these efforts. In contrast, our focus is on a new direction that can facilitate analytical modeling and optimization for sparse matrix computations like SpMM. We believe there are opportunities to combine ideas from these previous efforts with the matrix-signature based analytical modeling approach we develop in this paper. e) Inspector/Executor Compiler Optimization: Inspector-executor strategies represent a promising direction, where a one-time execution of an inspector code that analyzes the specific non-zero structure of the sparse matrix can suitably enable efficient execution of the executor code that performs the intended sparse-matrix computation. But the development of effective inspector-executor strategies for arbitrary sparse-matrix computations remains a significant open challenge.
While the optimization strategy for sparse-matrix computations presented in this paper does not directly seek to build an optimizing compiler, we believe that the sparse-matrix signatures we develop here can be used in developing an inspector/executor based optimizing compiler for a class of sparse-matrix computations exhibiting the axis-aligned data reuse property, like SpMM and SDDMM.

Algorithm 5: Tiled SpMM
```plaintext
input : CSR S[M][N], dense I[N][K]
output: dense O[M][K]
for kk = 0 to floor((K-1)/T_k) do
    kbound = min((kk+1)*T_k, K) - 1
    for jj = 0 to floor((N-1)/T_j) do
        for ii = 0 to floor((M-1)/T_i) do
            ibound = min((ii+1)*T_i, M) - 1
            for i = ii*T_i to ibound do
                nnz_begin = S.Tj_tile[jj].rowptr[i]
                nnz_end   = S.Tj_tile[jj].rowptr[i+1] - 1
                for e = nnz_begin to nnz_end do
                    j = S.Tj_tile[jj].colidx[e]
                    for k = kk*T_k to kbound do
                        O[i][k] += S.Tj_tile[jj].values[e] * I[j][k]
```

IV. DATA MOVEMENT ANALYSIS AND OPTIMIZATION FOR TILED SPMM

Tiling is a well-known technique to minimize data movement. A key consideration for tiled code is the choice of tile sizes along all tiled dimensions of the iteration space. In general, for a d-dimensional tile, d tile-size parameters must be chosen. The choice of tile size is usually driven by considerations of minimization of data movement, and much work has focused on this problem for regular computations on dense arrays. Even for regular computations, optimal tile size selection is extremely hard, and the state of practice relies heavily on auto-tuning, i.e., empirical search through the space of tile-size configurations, using actual execution of the tiled code on the target platform for different tile-size choices.
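A minimal executable rendition of the tile-loop structure of Alg. 5 (tile-loop order <kk, jj, ii>) is sketched below; for brevity it filters non-zeros by column band on the fly instead of pre-building the per-band CSR (Tj_tile) assumed in the pseudocode, which changes the traversal cost but not the result.

```python
def tiled_spmm(rowptr, colidx, values, B, K, Ti=2, Tj=2, Tk=2):
    """Tiled SpMM following the loop structure of Alg. 5.
    Each non-zero of S contributes to O exactly once per k-tile."""
    M, N = len(rowptr) - 1, len(B)
    O = [[0.0] * K for _ in range(M)]
    for kk in range(0, K, Tk):                    # tile loop over K
        for jj in range(0, N, Tj):                # tile loop over N
            for ii in range(0, M, Ti):            # streaming loop over M
                for i in range(ii, min(ii + Ti, M)):
                    for e in range(rowptr[i], rowptr[i + 1]):
                        j = colidx[e]
                        if jj <= j < jj + Tj:     # non-zeros in this column band
                            for k in range(kk, min(kk + Tk, K)):
                                O[i][k] += values[e] * B[j][k]
    return O

# 2 x 3 sparse S with non-zeros (0,0)=1, (0,2)=2, (1,1)=3; all-ones B
O = tiled_spmm([0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0],
               [[1.0, 1.0, 1.0]] * 3, 3)
print(O)  # [[3.0, 3.0, 3.0], [3.0, 3.0, 3.0]]
```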
Auto-tuning is often very time consuming, especially for high-dimensional loop nests, because the number of tile-size combinations grows exponentially with the dimensionality of the loop nest. If 100 possible tile sizes are considered along each dimension, the total number of cases to consider is $10^6$. We show below how this can be considerably reduced by virtue of a special property that holds for SpMM and SDDMM.

![Fig. 1: Illustration of data access/reuse pattern for tiled SpMM: innermost tile-loop along I dimension](image)

**TABLE I: Definitions of some of the terms**

<table> <thead> <tr> <th>Shorthand Notation</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>$\text{nars_tile}(T_j)$</td> <td>number of active row segments in a column tile of size $T_j$</td> </tr> <tr> <td>$\text{nacs_tile}(T_i)$</td> <td>number of active column segments in a row tile of size $T_i$</td> </tr> <tr> <td>$\text{nnz_per_row_seg}(T_j)$</td> <td>average nnz per row segment in a column tile of size $T_j$</td> </tr> <tr> <td>$\text{nnz_per_col_seg}(T_i)$</td> <td>average nnz per column segment in a row tile of size $T_i$</td> </tr> <tr> <td>interval[d]</td> <td>a segment of a column in the sparse matrix with length d</td> </tr> <tr> <td>vertical interval</td> <td>a vertical column segment in the matrix with height $T_i$</td> </tr> <tr> <td>non-active interval</td> <td>an interval with no nnz in it</td> </tr> <tr> <td>active interval</td> <td>an interval with at least one nnz in it</td> </tr> <tr> <td>aligned interval</td> <td>an interval that starts at position $k*T_i$ where $k$ is an integer</td> </tr> <tr> <td>unaligned interval</td> <td>an interval that does not start at position $k*T_i$ where $k$ is an integer</td> </tr> <tr> <td>total segments</td> <td>total number of intervals</td> </tr> </tbody> </table>

A. Tiling with Different Tile Loop Orders

Alg. 5 shows high-level pseudocode for a tiled SpMM algorithm using a CSR data representation for the sparse matrix $S$.
There are three tiling loops ($ii$, $jj$, $kk$) and three intra-tile loops ($i$, $e$, $k$). While efficient access to the elements of the sparse matrix $S$ in CSR representation does impose some constraints on loop ordering, the inherent data dependencies of SpMM and SDDMM permit all possible permutations within the 3 tile-loops and among the 3 intra-tile loops. Focusing on only the tile-loops, 6 permutations are possible, of which the permutation with $ii$ innermost is shown in Alg. 5. Fig. 1 illustrates the data access/reuse characteristics for tiled execution with this tile-loop order. The figure shows the data accessed by a single tile of size $T_k \times T_j \times T_i$ for tile-loop order $<kk,jj,ii>$. Seven non-zero elements are shown in $S$, within the index space along $<i,j>$ covered by the tile. Each non-zero $S_{i,j}$ causes a slice of $T_k$ contiguous data elements in row $j$ of $I$ to be read and a slice of $T_k$ contiguous data elements in row $i$ of $O$ to be modified. If two non-zero elements in the tile footprint of $S$ have the same column index $j$, they will both access the same elements in the input matrix $I$, thereby enabling intra-tile reuse for those elements of $I$. Similarly, elements of $S$ that have the same row index $i$ will enable reuse of elements of $O$ in cache. In the example shown in Fig. 1, for processing all 7 non-zero elements, only $3T_k$ elements of $I$ and $4T_k$ elements of $O$ are accessed, not $7T_k$ elements of each. Further, as successive tiles are executed along $ii$, while distinct elements of $O$ are accessed in different tiles, any slice of $I$ will only be accessed once for all tiles along the innermost tile loop $ii$ (assuming the $T_j \times T_k$ stationary slice of $I$ fits in cache). We call the innermost tiling loop of a tiled loop nest the streaming loop and the corresponding dimension ($M$, $N$ or $K$) the streaming dimension.
This term is used because, as we explain later in this section, it is never beneficial for the tile size along the innermost tiling loop to be larger than one. A tile size of one means that we traverse the indices along that fastest-varying index in a streamed fashion. The streaming choices for SpMM (and their impact) can be explained with the help of the tiled SpMM algorithm shown in Alg. 5. A property of SpMM (and SDDMM) is that each loop iterator is active in the indexing of two out of the three matrices, but is completely absent in the indexing of the third matrix: $i$ is used in indexing into $S$ and $O$ but not $I$; $j$ is used in indexing into $S$ and $I$ but not $O$; $k$ is used in indexing into $I$ and $O$ but not $S$. Thus, exactly the same set of elements in a slice of the non-indexed matrix is accessed by successive tiles as the innermost tile iterator is varied, since that iterator is absent in the indexing of data elements for that matrix. We refer to that matrix as the stationary matrix. Any of the three matrices ($S$, $I$, $O$) can be chosen to be the stationary matrix, based on the choice of streaming tile dimension: - Streaming along $M(i)$: A tile of $I$ of size $T_j \times T_k$ is kept stationary in fast memory; - Streaming along $N(j)$: A tile of $O$ of size $T_i \times T_k$ is kept stationary in fast memory; - Streaming along $K(k)$: A tile of $S$ of size $T_i \times T_j$ is kept stationary in fast memory. Figure 2 depicts the streaming choices. B. Pruning the search space From the discussion of the data access/reuse characteristics of SpMM in Fig. 1, we can see that all data reuses are along iteration-space axes, and each of the axes is a reuse direction for one of the three arrays: $i$ is the reuse direction for the input matrix $I$, $j$ is the reuse direction for the output matrix $O$, and $k$ is the reuse direction for the input sparse matrix $S$.
Now, let us consider the change in accessed data within tiles as the innermost tiling loop is traversed. Since that direction is a reuse direction for exactly one of the three arrays, the data footprint of that array does not change from tile to tile, while the innermost tile index appears in the indexing of the other two arrays, whose data footprints are therefore completely disjoint from those of the previous tile. So, for any of the three possible permutations of the tiling loops, one array enjoys complete reuse while the other two arrays get no reuse as we traverse the innermost tiling loop. This means that as the entire innermost tiling loop is traversed, total reuse is achieved for the array whose reuse direction is the innermost tiling dimension, while the other two arrays only achieve reuse corresponding to the number of active iteration-space instances along the dimensions of the two outer tiling loops. The direct consequence of this observation is that the tile size along the innermost tiling dimension does not affect the achieved reuse of any array; reuse is affected only by the tile sizes of the two outer tiling loops, whose dimensions are the reuse directions of the other two arrays. Two important conclusions from the above analysis are: - The tile size along the innermost tiling dimension does not affect reuse of any of the arrays, but does affect the data footprint of multiple arrays. Hence it is best to choose that tile size as one (or a suitable small fixed value, if necessary, to enable effective vectorized execution), to minimize its impact on the data footprint of the arrays. - The relative order of the outer two tiling loops has only a minor second-order effect on total data movement, and therefore they can be nested in either order without much impact on data movement volume. The above two observations enable a significant reduction in the tiled configurations to be considered for performance optimization of SpMM.
The same is true for SDDMM. If T possible tile sizes are to be considered along each dimension, instead of 3! permutations of tiled loops and \(T^3\) configurations for different combinations of tile sizes (a total of \(6T^3\) cases), the above two observations mean that we only need to consider 3 possible innermost tiling loops (streaming cases) and \(T^2\) tile combinations (the innermost tile size does not need exploration), for a total of \(3T^2\) cases. Further, the total data movement for an arbitrary sparse matrix can be efficiently estimated using sparse matrix signatures, whose efficient computation is described in the next section. C. Data movement analysis a) Streaming along M(i): In this scheme the I matrix is kept stationary. Hence the total volume of data moved for I is \(N \times K\). Each S element is represented by a value and an index in CSR format. Each S element is read in once for every \(T_k\) tile. Hence the data movement volume for S is \((2 \times \text{nnz} \times K)/T_k\). A simple over-approximation of the volume of data movement due to O is \((2 \times M \times K \times N)/T_j\): for each of the \(N/T_j\) column tiles, each O element is read and written once. However, depending on the sparsity level and sparsity structure, there may be many empty row-segments in a column tile of width \(T_j\), in which case the corresponding O elements are not read/written. The total volume of O elements, after accounting for empty row-segments of S, can be expressed as \(2 \times \text{nars\_tile}(T_j) \times K\), where \(\text{nars\_tile}(T_j)\) represents the number of non-empty or active row-segments in all tiles combined, which is a function of \(T_j\). In other words, for every active row-segment, we have to read and write K elements of O. Thus the total volume is \((N + 2 \times \text{nnz}/T_k + 2 \times \text{nars\_tile}(T_j)) \times K\). b) Streaming along N(j): This scheme is similar to streaming along M. Here, we keep O stationary, as depicted in Alg. 6.
Hence, the total volume of data transferred for O is \(M \times K\). Each S element is brought into memory once for every \(T_k\) tile. Hence, the data movement volume for S is \((2 \times \text{nnz} \times K)/T_k\). Analogous to O when streaming along M, the total data transfer volume for I is \(\text{nacs\_tile}(T_i) \times K\), where \(\text{nacs\_tile}(T_i)\) represents the number of active column-segments, which is a function of \(T_i\). Thus the total volume is \((M + 2 \times \text{nnz}/T_k + \text{nacs\_tile}(T_i)) \times K\). c) Streaming along K(k): In this case each S element is read only once and gets full reuse. Hence, the total volume of data transferred for S is \(2 \times \text{nnz}\). Similar to streaming along M, the total data volume transferred for O is \(2 \times \text{nars\_tile}(T_j) \times K\). Similar to streaming along N, the total data transfer volume for I is \(\text{nacs\_tile}(T_i) \times K\). Thus the total volume is \(2 \times \text{nnz} + (2 \times \text{nars\_tile}(T_j) + \text{nacs\_tile}(T_i)) \times K\). d) Optimizing Tile Sizes: The analyses for case (a) and case (b) are similar. Here, we present the analysis for case (b). Let \(\rho\) be the density of the sparse matrix S, defined as the fraction of non-zero elements over the total number of elements in the matrix. In case (b), a \(T_i \times T_k\) slice of O, a slice of S holding \(T_i \times \rho\) non-zeros on average (two words each, for the value and the column index), and a \(1 \times T_k\) slice of I are kept in fast memory (cache or scratchpad). Thus the capacity constraint is: \[T_i \times T_k + 2 \times T_i \times \rho + T_k \leq C,\] where \(C\) is the capacity of fast memory. The higher \(T_i\) is, the lower the data movement cost for I. Similarly, increasing \(T_k\) lowers the amount of data movement for S. Let \(\text{nnz\_per\_col\_seg}(T_i)\) be the average number of non-zero elements per active column segment of height \(T_i\) in S. Then, the total number of non-zero elements is: \[\text{nnz} = \text{nacs\_tile}(T_i) \times \text{nnz\_per\_col\_seg}(T_i)\] (2) From eq.
(2): \[ nacs\_tile(T_i) = \frac{nnz}{nnz\_per\_col\_seg(T_i)} \] (3) Our objective is: \[ \min_{T_k,T_i} \left\{ \left( M + 2 \times \frac{nnz}{T_k} + nacs\_tile(T_i) \right) \times K \right\} \] (4) subject to the constraint \[ T_i \times T_k + 2 \times T_i \times \rho + T_k \leq C. \] (5) Since \( M \) and \( K \) are constants they can be removed from the minimization objective. Thus the minimization objective from eq. (4) can be re-written as: \[ \min_{T_k,T_i} \left\{ 2 \times \frac{nnz}{T_k} + nacs\_tile(T_i) \right\} \] (6) Equation (3) can be substituted in eq. (6) to obtain: \[ \min_{T_k,T_i} \left\{ nnz \times \left( \frac{2}{T_k} + \frac{1}{nnz\_per\_col\_seg(T_i)} \right) \right\} \] (7) Since \( nnz \) is constant, the minimization objective can be written as \[ \min_{T_k,T_i} \left\{ \frac{2}{T_k} + \frac{1}{nnz\_per\_col\_seg(T_i)} \right\} \] (8) A similar analysis can be done for case (a), yielding the objective: \[ \min_{T_k,T_j} \left\{ \frac{1}{T_k} + \frac{1}{nnz\_per\_row\_seg(T_j)} \right\} \] (9) Note that in case (b) (eq. (8)) the input dense matrix I is read only once, while in case (a) (eq. (9)) the output dense matrix O is moved twice (read and write); factoring this common factor of 2 out of the case-(a) objective is what halves the \(1/T_k\) term in eq. (9). A similar analysis can be performed for streaming along the \( k \) (summation) dimension, and is omitted here. The above analysis shows that the analytical estimate of data volume is a function of \( nars\_tile(T_j) \) or \( nacs\_tile(T_i) \). These 1D functions of tile sizes compactly represent the function signatures corresponding to the 2D sparsity structure of a sparse matrix. As per the above analysis, these one-dimensional function signatures of a sparse matrix are sufficient to estimate the total data movement for SpMM. By calculating the estimated data volume for a range of candidate tile sizes, the best tile sizes for a given sparse matrix can be selected.
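The resulting tile-size selection procedure is simple enough to sketch: evaluate the case-(b) objective of eq. (8) over candidate (T_i, T_k) pairs, discarding pairs that violate the capacity constraint of eq. (5). The signature function and parameter values below are hypothetical placeholders, not measurements from a real matrix.

```python
def pick_tile_sizes(nnz_per_col_seg, rho, C, candidates):
    """Model-driven tile-size selection for case (b):
    minimize 2/Tk + 1/nnz_per_col_seg(Ti)
    subject to Ti*Tk + 2*Ti*rho + Tk <= C (eq. (5)).
    `nnz_per_col_seg` maps Ti to the matrix-signature value."""
    best, best_cost = None, float("inf")
    for Ti in candidates:
        for Tk in candidates:
            if Ti * Tk + 2 * Ti * rho + Tk > C:
                continue  # tile does not fit in fast memory
            cost = 2.0 / Tk + 1.0 / nnz_per_col_seg(Ti)
            if cost < best_cost:
                best, best_cost = (Ti, Tk), cost
    return best

def sig(Ti):
    # Hypothetical signature: taller segments pack more nnz per segment.
    return 1.0 + 0.5 * Ti

choice = pick_tile_sizes(sig, rho=0.01, C=16384, candidates=[32, 64, 128, 256])
print(choice)
```

With these placeholder values, the largest tile pairs are rejected by the capacity constraint, and the model trades off the two remaining terms to pick the best feasible pair.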
In the next section, we present an efficient algorithm for computing a matrix signature by making only a single pass through the non-zeros of the matrix. The data movement volume difference between methods (a) and (b) is \( 2 \times (nars\_tile(T_j) - nacs\_tile(T_i)) \). In our experiments, this quantity is almost always positive, meaning that (b) is more memory efficient; therefore, we use streaming along J (J-Stream) for our experiments. Pseudocode for the J-Stream method is shown in Alg. 6, which also shows how coarse-grained shared-memory parallelism is utilized. V. EFFICIENT COMPUTATION OF SPARSE MATRIX SIGNATURE ![Fig. 3: Examples of defined concepts.](image) In this section, we present a novel algorithm for efficient estimation of the sparse matrix signatures. Naive generation of the signature for a matrix involves repeatedly scanning the sparse matrix to count the number of active column/row segments as a function of row/column band size. Instead, the algorithm explained below generates the signature using a single pass over the sparse matrix to record a histogram of the spacings between successive non-zero elements in a column (or row). Assume we have an \( M \times N \) sparse matrix \( \mathcal{A} \). Tiled execution of SpMM or SDDMM corresponds to accessing the sparse matrix data in partitioned row bands of height \( T_i \) (or column bands of width \( T_j \)). If, in a given row band, a column-segment is made up of only zero elements, then the corresponding segment is said to be non-active. If it contains at least one non-zero element, then it is active. For a given value \( T_i \), a segment \( \mathcal{A}[i..(i + T_i - 1)][j] \) is said to be aligned if \( i \bmod T_i = 0 \). For a given tiling of height \( T_i \), the matrix is partitioned into...
Sparse signature as the proportion of active segments: The idea is to compute, as a function of $T_i$, the ratio of the number of (not necessarily aligned) active segments divided by the total number of (not necessarily aligned) segments (active or not). To do so, we introduce the notion of vertical intervals, a vertical interval simply being a maximal vertical set of consecutive zero elements in $\mathcal{A}$. In other words, a vertical interval is a vertical set of zero elements bracketed by non-zero elements or by matrix boundaries. Figure 3 illustrates these concepts. More formally, if for $-1 \leq i_1 < i_2 \leq M$ and $0 \leq j < N$, we have $$i_1 = -1 \text{ or } \mathcal{A}[i_1][j] \neq 0$$ and $$i_2 = M \text{ or } \mathcal{A}[i_2][j] \neq 0$$ and $$\forall i_1 < i < i_2, \quad \mathcal{A}[i][j] = 0$$ then $\mathcal{A}[(i_1 + 1)...(i_2 - 1)][j]$ is a vertical interval of length $i_2 - i_1 - 1$. Alg. 7 computes the distribution of vertical interval lengths, that is, for each $0 \leq d \leq M$, intervals$[d]$, the total number of vertical intervals of length $d$. For each column $j$, it simply traverses all the non-zero rows $i$ ($\mathcal{A}[i][j] \neq 0$) in increasing order. The recorded vertical interval is the one between lasti (the previously visited non-zero row) and $i$ (the current visited non-zero row), that is, the vertical interval $\mathcal{A}[(\text{last}i + 1)...(i - 1)][j]$ of length $d = i - \text{last}i - 1$. Matrix boundaries are represented by virtual non-zero rows $i = -1$ and $i = M$. **Algorithm 7:** Computation of intervals$[d]$: total number of vertical intervals of length $d$ ```plaintext for $j = 0 : N - 1$ do lasti ← -1; for $i \in \{i| \mathcal{A}[i][j] \neq 0\} \cup \{M\}$ in increasing order do $d \leftarrow i - \text{last}i - 1$; intervals$[d]$ ← intervals$[d] + 1$; lasti ← $i$ end for end for ``` The overall approach for computing the sparse matrix signature is to compute the total number of non-active segments.
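A direct transcription of Alg. 7, with the non-zeros given as a coordinate set rather than CSR (an assumption made for brevity):

```python
def interval_histogram(coords, M, N):
    """Alg. 7: histogram of vertical-interval lengths. `coords` is the
    set of (row, col) positions of non-zeros; intervals[d] counts the
    maximal runs of consecutive zeros of length d within each column.
    Matrix boundaries act as virtual non-zero rows -1 and M."""
    intervals = [0] * (M + 1)
    by_col = {}
    for (i, j) in coords:
        by_col.setdefault(j, []).append(i)
    for j in range(N):
        lasti = -1
        for i in sorted(by_col.get(j, [])) + [M]:
            d = i - lasti - 1
            intervals[d] += 1
            lasti = i
    return intervals

# 4 x 2 matrix with non-zeros at row 1 of column 0, rows 0 and 3 of column 1
h = interval_histogram({(1, 0), (0, 1), (3, 1)}, 4, 2)
print(h)  # [2, 1, 2, 0, 0]
```

Column 0 yields one interval of length 1 and one of length 2; column 1 yields two of length 0 (adjacent to its non-zeros) and one of length 2.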
A pertinent property of vertical intervals is that a non-active segment is necessarily included in a vertical interval. More interestingly, we can precisely compute the number of non-active segments included in a vertical interval of length $d \geq T_i$ as $d - T_i + 1$. As a consequence, we can express the number of non-active segments as a function of intervals$[d]$ as follows: $$\text{nonactive}[T_i] = \sum_{d \geq T_i} \text{intervals}[d] \times (d - T_i + 1) \quad (10)$$ The total number total$[T_i] = N \times (M - T_i + 1)$ of segments decomposes into the active (total$[T_i] - \text{nonactive}[T_i]$) and the non-active (nonactive$[T_i]$) ones. So the proportion $p[T_i]$ of active segments can be simply expressed as $$p[T_i] = \frac{\text{total}[T_i] - \text{nonactive}[T_i]}{\text{total}[T_i]} \quad (11)$$ Computing nonactive$[T_i]$ in linear time: While Alg. 7 is linear in the number of non-zero elements, eq. (10), which needs to be evaluated for each potential value of $T_i$, leads to a quadratic complexity of $O(M^2)$ if computed naively. To avoid this quadratic complexity, we rewrite eq. (10) using two cumulative distributions that, in turn, as explained later, can be computed in linear time. They are notsmaller$[T_i]$ (the number of vertical intervals of length at least $T_i$): $$\text{notsmaller}[T_i] = \sum_{d \geq T_i} \text{intervals}[d] \quad (12)$$ and notsmallerweighted$[T_i]$ (the length-weighted number of vertical intervals of length at least $T_i$): $$\text{notsmallerweighted}[T_i] = \sum_{d \geq T_i} d \times \text{intervals}[d] \quad (13)$$ Indeed, by massaging eq.
(10), we get $$\text{nonactive}[T_i] = \sum_{d \geq T_i} \text{intervals}[d] \times (d - T_i + 1)$$ $$= \sum_{d \geq T_i} d \times \text{intervals}[d] - (T_i - 1) \times \sum_{d \geq T_i} \text{intervals}[d]$$ $$= \text{notsmallerweighted}[T_i] - (T_i - 1) \times \text{notsmaller}[T_i] \quad (14)$$ We next discuss how to compute those two distributions (notsmaller$[T_i]$ and notsmallerweighted$[T_i]$) in linear time. Alg. 8 performs this task iteratively, by starting from $T_i = M + 1$ and expressing notsmaller$[T_i]$ and notsmallerweighted$[T_i]$ as recurrence equations. Indeed, it may be observed that $$\text{notsmaller}[T_i] = \sum_{d \geq T_i} \text{intervals}[d]$$ $$= \text{intervals}[T_i] + \sum_{d \geq T_i + 1} \text{intervals}[d]$$ $$= \text{intervals}[T_i] + \text{notsmaller}[T_i + 1]$$ and $$\text{notsmallerweighted}[T_i] = \sum_{d \geq T_i} d \times \text{intervals}[d]$$ $$= T_i \times \text{intervals}[T_i] + \sum_{d \geq T_i + 1} d \times \text{intervals}[d]$$ $$= T_i \times \text{intervals}[T_i] + \text{notsmallerweighted}[T_i + 1]$$

**Algorithm 8:** Linear-time computation of notsmaller[Ti] and notsmallerweighted[Ti]
```plaintext
notsmaller[M+1] ← 0;
notsmallerweighted[M+1] ← 0;
for Ti = M down to 1 do
    notsmaller[Ti] ← intervals[Ti] + notsmaller[Ti+1];
    notsmallerweighted[Ti] ← Ti × intervals[Ti] + notsmallerweighted[Ti+1];
end for
```

Computing the signature: To summarize, the computation of the signature \( p[T_i] \) begins by computing the number of vertical intervals of length \( d \), for each value of \( 0 \leq d \leq M \), using Alg. 7. The complexity of this pass is linear in the number of non-zero elements of the matrix. We then compute the two cumulative distributions (with a complexity of \( O(M) \)) using Alg. 8.
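Putting Alg. 8 and eqs. (14) and (11) together, the full signature computation can be sketched as follows; the toy histogram corresponds to a 4 x 2 matrix with non-zeros at (1,0), (0,1) and (3,1).

```python
def signature(intervals, M, N):
    """Compute p[Ti] (eq. (11)) for Ti = 1..M from the vertical-interval
    histogram, using the linear-time recurrences of Alg. 8 and the
    rewrite of nonactive[Ti] from eq. (14)."""
    notsmaller = [0] * (M + 2)
    weighted = [0] * (M + 2)
    for Ti in range(M, 0, -1):                   # Alg. 8 recurrences
        notsmaller[Ti] = intervals[Ti] + notsmaller[Ti + 1]
        weighted[Ti] = Ti * intervals[Ti] + weighted[Ti + 1]
    p = [None] * (M + 1)                          # p[0] unused
    for Ti in range(1, M + 1):
        nonactive = weighted[Ti] - (Ti - 1) * notsmaller[Ti]  # eq. (14)
        total = N * (M - Ti + 1)
        p[Ti] = (total - nonactive) / total                   # eq. (11)
    return p

p = signature([2, 1, 2, 0, 0], 4, 2)
print([round(x, 3) for x in p[1:]])  # [0.375, 0.667, 1.0, 1.0]
```

For Ti = 1, 3 of the 8 unit segments contain a non-zero (p = 0.375), and by Ti = 3 every segment is active, matching a brute-force enumeration of this small matrix.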
Those two distributions can then be used to compute the number of non-active segments of height \( T_i \) by directly applying eq. (14). The complexity is again \( O(M) \). Finally, eq. (11) can be used to derive the signature for each value of \( T_i \geq 1 \) (\( O(M) \) complexity). Hence, the overall complexity is linear, dominated by the initial scan of the matrix performed by Alg. 7. This enables efficient generation of approximations to the sparse matrix signatures as a function of column/row panel size. We find empirically that the fast approximate signatures track the exact signatures very closely. Figure 4 shows the exact and approximate signatures for one of the tested matrices. VI. EXPERIMENTAL EVALUATION In this section, we compare the performance of the model-driven tiled J-Stream implementations of SpMM and SDDMM against state-of-the-art alternatives: (i) Intel's MKL, (ii) the TACO compiler [18], and (iii) Compressed Sparse Blocks (CSB) [3]. The experiments were performed on a dual-socket CPU platform with two Intel Xeon E5-2680 v4 processors (Broadwell architecture, 14 cores per socket, clocked at 2.40 GHz, with a 256 KB L2 cache per core). We carried out the experimental evaluation using 22 sparse matrices. These datasets were selected based on previous papers that studied sparse matrix multiplication [23], [34]. All the datasets used in the experiments were downloaded from the publicly available SuiteSparse Matrix Collection [10]. The characteristics of the matrices are listed in Table III. Figure 6 compares performance for two different feature (\( K \)) sizes, \( K = 128 \) and \( K = 1024 \). Each run was repeated 100 times, and the median value is reported. The performance of the different approaches was normalized with respect to J-Stream. Each bar corresponds to the normalized GFLOPS achieved on a given dataset; the higher the bar, the better the performance.
The last group of bars shows the geometric mean of each implementation over the 22 test matrices. The GFLOPS achieved by J-Stream is also shown in these charts. For example, as shown in Figure 6, the J-Stream SpMM kernel's performance ranges from 9 GFLOPS to 186 GFLOPS on the Broadwell platform for different matrices. ![Active Column Estimation for hood](image1) **Fig. 4:** Number of active panel segments, estimated vs. actual, for the hood matrix. ![Preprocessing Time Ratio to Single of a Kernel](image2) **Fig. 5:** Ratio of pre-processing to execution time for J-Stream SpMM and SDDMM for \( K = 128 \) and \( K = 1024 \). A. SpMM Performance For \( K = 128 \), J-Stream (using model-selected tile sizes) achieves 9%, 9%, and 50% speed-up over TACO, MKL, and CSB, respectively. MKL and TACO are faster than J-Stream... Fig. 6: Comparison of performance of SpMM and SDDMM kernels for $K = 128$ and $K = 1024$. The J-Stream performance bars correspond to model-selected tile sizes; the spike in the middle of each J-Stream bar shows the minimum and maximum performance over an exhaustive search across tile sizes. Preprocessing times are not included for any of the methods.
TABLE II: Model-selected tile sizes for the test matrices <table> <thead> <tr> <th>Matrix Name</th> <th>$K=128$</th> <th>$K=1024$</th> </tr> </thead> <tbody> <tr> <td></td> <td>$t_i$</td> <td>$t_k$</td> </tr> <tr> <td>2cubes_sphere</td> <td>512</td> <td>64</td> </tr> <tr> <td>cage12</td> <td>1024</td> <td>32</td> </tr> <tr> <td>cant</td> <td>256</td> <td>128</td> </tr> <tr> <td>consph</td> <td>256</td> <td>128</td> </tr> <tr> <td>cop20k_A</td> <td>256</td> <td>128</td> </tr> <tr> <td>facebook_combined</td> <td>145</td> <td>128</td> </tr> <tr> <td>filter3D</td> <td>1024</td> <td>32</td> </tr> <tr> <td>hood</td> <td>256</td> <td>128</td> </tr> <tr> <td>m133-b3</td> <td>512</td> <td>64</td> </tr> <tr> <td>mac_econ_fwd500</td> <td>1024</td> <td>32</td> </tr> <tr> <td>majorbasis</td> <td>512</td> <td>64</td> </tr> <tr> <td>mc2depi</td> <td>1024</td> <td>32</td> </tr> <tr> <td>offshore</td> <td>1024</td> <td>32</td> </tr> <tr> <td>patents_main</td> <td>512</td> <td>64</td> </tr> <tr> <td>pdb1HYS</td> <td>256</td> <td>128</td> </tr> <tr> <td>poisson3Da</td> <td>483</td> <td>64</td> </tr> <tr> <td>pwtk</td> <td>256</td> <td>128</td> </tr> <tr> <td>rma10</td> <td>256</td> <td>128</td> </tr> <tr> <td>scircuit</td> <td>256</td> <td>128</td> </tr> <tr> <td>shipsec1</td> <td>256</td> <td>128</td> </tr> <tr> <td>webbase-1M</td> <td>256</td> <td>128</td> </tr> <tr> <td>web-BerkStan</td> <td>256</td> <td>128</td> </tr> </tbody> </table> TABLE III: Properties of the test matrices.
<table> <thead> <tr> <th>Matrix Name</th> <th>Rows</th> <th>Columns</th> <th>nnz</th> </tr> </thead> <tbody> <tr> <td>2cubes_sphere</td> <td>101,492</td> <td>101,492</td> <td>1,647,264</td> </tr> <tr> <td>cage12</td> <td>130,228</td> <td>130,228</td> <td>2,032,536</td> </tr> <tr> <td>cant</td> <td>62,451</td> <td>62,451</td> <td>6,010,480</td> </tr> <tr> <td>consph</td> <td>83,334</td> <td>83,334</td> <td>6,010,480</td> </tr> <tr> <td>cop20k_A</td> <td>121,192</td> <td>121,192</td> <td>2,624,331</td> </tr> <tr> <td>facebook_combined</td> <td>4,039</td> <td>4,039</td> <td>88,234</td> </tr> <tr> <td>filter3D</td> <td>106,437</td> <td>106,437</td> <td>2,707,179</td> </tr> <tr> <td>hood</td> <td>220,542</td> <td>220,542</td> <td>10,768,436</td> </tr> <tr> <td>m133-b3</td> <td>200,200</td> <td>200,200</td> <td>800,800</td> </tr> <tr> <td>mac_econ_fwd500</td> <td>206,500</td> <td>206,500</td> <td>1,273,389</td> </tr> <tr> <td>majorbasis</td> <td>160,000</td> <td>160,000</td> <td>1,750,416</td> </tr> <tr> <td>mc2depi</td> <td>525,825</td> <td>525,825</td> <td>2,100,225</td> </tr> <tr> <td>offshore</td> <td>259,789</td> <td>259,789</td> <td>4,242,673</td> </tr> <tr> <td>patents_main</td> <td>240,547</td> <td>240,547</td> <td>560,943</td> </tr> <tr> <td>pdb1HYS</td> <td>36,417</td> <td>36,417</td> <td>4,344,765</td> </tr> <tr> <td>poisson3Da</td> <td>13,514</td> <td>13,514</td> <td>352,762</td> </tr> <tr> <td>pwtik</td> <td>217,918</td> <td>217,918</td> <td>11,634,424</td> </tr> <tr> <td>rma10</td> <td>46,835</td> <td>46,835</td> <td>2,374,001</td> </tr> <tr> <td>scircuit</td> <td>170,998</td> <td>170,998</td> <td>958,936</td> </tr> <tr> <td>shipspec1</td> <td>140,874</td> <td>140,874</td> <td>7,813,404</td> </tr> <tr> <td>webbase-1M</td> <td>1,000,005</td> <td>1,000,005</td> <td>3,105,536</td> </tr> <tr> <td>web-BerkStan</td> <td>685,230</td> <td>685,230</td> <td>7,600,595</td> </tr> </tbody> </table> It may be observed that J-Stream SpMM performance drops as the feature 
size increases from $K = 128$ to $K = 1024$. Our initial analysis shows that this is due to prefetching effects. Consider the case of $K = 128$ and $T_k = 128$. As a row of data from the output dense matrix is updated, prefetching causes data from the next adjacent row of $O$ to be brought into cache, and it is likely to be used soon afterwards. However, in the case where $K = 1024$, the prefetched data corresponds to the next tile along $K$, which is only processed after all the rows in the current tile have been processed. Hence, the probability of reuse is much lower and thus the efficacy of the prefetcher is decreased.

B. SDDMM Performance

We compared the performance of the J-Stream SDDMM implementation with TACO's SDDMM (Figure 6). On average, J-Stream achieved 70% and 52% speedup over TACO for $K=128$ and $K=1024$, respectively. J-Stream was faster than TACO in all SDDMM test cases. The geometric mean of TACO's performance increased as $K$ was increased from 128 to 1024, whereas it decreased for J-Stream. This is likely because TACO's data access pattern is not adversely affected by the prefetcher at large $K$, whereas J-Stream's is.

C. Model Effectiveness

To evaluate the effectiveness of our model (Section V), we compared the performance achieved using the model-selected tile sizes with the performance obtained through exhaustive search across tile sizes. The spikes in the middle of the J-Stream performance bars in Figure 6 show the minimum and maximum performance achieved over an exhaustive search across tile sizes. In general, our model performs quite well. For example, for SpMM with $K = 128$, the average gap between the performance using our model and the maximum achievable performance with empirically found optimal tile sizes is around 10%, and it is less than 20% in all instances except one (cage12), where it is 41%.
For the SDDMM kernel, on average, the performance gap between use of our model and the empirically determined optimal tile sizes found via exhaustive search is around 14%. The gap increases to 30% in some cases where the number of rows of the input is very small. When the number of rows is small, the entire output matrix can fit in cache, which eliminates the benefits of tiling. The performance gap between use of our model and use of the optimal tile sizes found by exhaustive auto-tuning is generally not very large. However, the data from the exhaustive searches also shows that there may still be room for improvement in the model, and for the development of model-driven auto-tuning, where the model is augmented with a limited amount of execution on the target platform for a selected set of tile sizes guided by the model.

D. Preprocessing Overhead

Fig. 5 shows the preprocessing time for creation of the matrix signatures and selection of tile sizes, relative to SpMM and SDDMM kernel execution time. As shown in Fig. 5, the normalized preprocessing time is quite negligible for \( K=1024 \). On average, the modeling time was 76% and 9% of the time of a single SpMM or SDDMM kernel execution for \( K=128 \) and \( K=1024 \), respectively. The preprocessing Algorithms 7 and 8 have complexities \( O(nnz) \) and \( O(N) \), respectively, and the total complexity of sequential preprocessing is \( O(nnz) \); we plan to parallelize it as part of future work.

E. Scalability

In order to compare scalability, we ran each SpMM implementation while varying the number of threads from 2 to 28 (the number of physical cores) and compared against the performance of the corresponding single-core run. Figs. 7 and 8 show speedup as a function of the number of threads. In all cases except SpMM with \( K=128 \), J-Stream achieves the best scaling; for SpMM with \( K=128 \), MKL scales better.

VII.
Conclusion

In this paper, we have developed an analytical approach to modeling data movement and optimizing tile sizes for SpMM and SDDMM. The analysis is made possible by the generation of compact one-dimensional function signatures that capture the impact of the 2D non-zero distribution pattern of a matrix on data movement for the applicable class of computations. Implementations of parallel tiled SpMM and SDDMM kernels using the model-driven tiling approach demonstrate the effectiveness of the developed methodology.

VIII. Acknowledgments

We thank the reviewers for their valuable feedback. This work was supported in part by the U.S. National Science Foundation through awards 1946752 and 1919122.
{"Source-Url": "https://inria.hal.science/hal-03117491/file/main.pdf", "len_cl100k_base": 13369, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 57977, "total-output-tokens": 15130, "length": "2e13", "weborganizer": {"__label__adult": 0.0004477500915527344, "__label__art_design": 0.0006127357482910156, "__label__crime_law": 0.0004954338073730469, "__label__education_jobs": 0.0012331008911132812, "__label__entertainment": 0.0001386404037475586, "__label__fashion_beauty": 0.0002551078796386719, "__label__finance_business": 0.0005030632019042969, "__label__food_dining": 0.0004680156707763672, "__label__games": 0.0009126663208007812, "__label__hardware": 0.0027942657470703125, "__label__health": 0.0009565353393554688, "__label__history": 0.0006060600280761719, "__label__home_hobbies": 0.00017499923706054688, "__label__industrial": 0.0011796951293945312, "__label__literature": 0.00034809112548828125, "__label__politics": 0.0004706382751464844, "__label__religion": 0.0008826255798339844, "__label__science_tech": 0.44189453125, "__label__social_life": 0.00012576580047607422, "__label__software": 0.01108551025390625, "__label__software_dev": 0.53271484375, "__label__sports_fitness": 0.0004112720489501953, "__label__transportation": 0.0008921623229980469, "__label__travel": 0.00028395652770996094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 51248, 0.06446]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 51248, 0.2481]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 51248, 0.82621]], "google_gemma-3-12b-it_contains_pii": [[0, 1112, false], [1112, 6027, null], [6027, 11835, null], [11835, 14176, null], [14176, 18984, null], [18984, 22854, null], [22854, 29188, null], [29188, 32938, null], [32938, 37659, null], [37659, 41248, null], [41248, 41590, null], [41590, 47500, null], [47500, 
51248, null], [51248, 51248, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1112, true], [1112, 6027, null], [6027, 11835, null], [11835, 14176, null], [14176, 18984, null], [18984, 22854, null], [22854, 29188, null], [29188, 32938, null], [32938, 37659, null], [37659, 41248, null], [41248, 41590, null], [41590, 47500, null], [47500, 51248, null], [51248, 51248, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 51248, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 51248, null]], "pdf_page_numbers": [[0, 1112, 1], [1112, 6027, 2], [6027, 11835, 3], [11835, 14176, 4], [14176, 18984, 5], [18984, 22854, 6], [22854, 29188, 7], [29188, 32938, 8], [32938, 37659, 9], [37659, 41248, 10], [41248, 41590, 11], [41590, 47500, 12], [47500, 51248, 13], [51248, 51248, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 51248, 0.25163]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
3ee1d82202344988c95cbc64554eb171cbc1e465
Tool assisted traffic accident causation analysis

An action design research approach

Bachelor of Science Thesis in Software Engineering and Management

ADAM DEBBICHE
ANDERS TREPTOW
YUWEN HE

The Author grants to Chalmers University of Technology and University of Gothenburg the non-exclusive right to publish the Work electronically and in a non-commercial purpose make it accessible on the Internet. The Author warrants that he/she is the author to the Work, and warrants that the Work does not contain text, pictures or other material that violates copyright law. The Author shall, when transferring the rights of the Work to a third party (for example a publisher or a company), acknowledge the third party about this agreement. If the Author has signed a copyright agreement with a third party regarding the Work, the Author warrants hereby that he/she has obtained any necessary permission from this third party to let Chalmers University of Technology and University of Gothenburg store the Work electronically and make it accessible on the Internet.

**Tool assisted traffic accident causation analysis**
An action design research approach

Adam Debbiche
Anders Treptow
Yuwen He

©Adam Debbiche, June 2012
©Anders Treptow, June 2012
©Yuwen He, June 2012

Examiner: Helena Holmström Olsson

University of Gothenburg
Chalmers University of Technology
Department of Computer Science and Engineering
SE-412 96 Göteborg
Sweden
Telephone +46 (0)31-772 1000

Department of Computer Science and Engineering
Göteborg, Sweden, June 2012

ABSTRACT

The field of accident causation analysis deals with the analysis of data gathered after traffic accidents. The goal is to develop new techniques to prevent future accidents and save more human lives. This paper, through an action design research approach at SAFER, provides a tool that helps in identifying causation patterns from accident data presented in the form of charts.
The paper examines different analysis techniques for accident causation data, and shows how action design research was used in this case. The paper also examines the effects of ADR on the organization, as well as the implications of adopting user involvement.

1- INTRODUCTION

The amount of data stored today is growing at a high rate, and there is no sign of this slowing down anytime soon. It is therefore important to find technological solutions that allow the exploitation of large sets of data. Data mining, i.e. the extraction and discovery of previously unknown yet possibly valuable information from large sets of data, is a new field that is increasingly being used today and has emerged as a major research domain (Nirkhi, 2010). Within data mining, different techniques are used to analyze large sets of data. Nirkhi (2010) argues that artificial neural networks, decision trees and genetic algorithms are among the most popular approaches. Some domains, like finance, rely more on neural networks to analyze data due to their ability to discover patterns and predict future behavior, which assists companies in strategic planning (Zhang and Zhou, 2004). Similarly, domains where graph theory is a common occurrence require data mining approaches related to structural pattern discovery in graphs (Wang et al., 2002). This implies that the characteristics and goals of the domain must be taken into account when assessing the feasibility of specific implementations of data analysis tools. In the field of accident causation analysis, there exist different techniques for analyzing data gathered at accident scenes. Thus, the feasibility of an accident causation analysis tool depends on the analysis approach used and the domain's characteristics and goals. To this end, our study focuses on a specific domain of data mining: traffic accident causation data analysis.
Our collaborating organization (SAFER) has a history of manually transforming and visualizing chains of events during its accident analysis work. According to Kotter (1995), introducing change in an organization is a long process. Hence, moving from well-established manual processes to automated data mining is sometimes a daunting task in practice. Our focus in this paper is specifically oriented towards the design and development of an automated data analysis tool. Within this specific setting, our study focuses on root cause analysis of actual traffic accidents, i.e. pre-crash scenarios. Currently, there exists a formal method for retrieving, classifying and analyzing the data collected at crash sites, referred to as DREAM (Ljung, 2002). This method suggests the development of charts by looking at multiple viewpoints on the causes of an accident. Each accident can produce multiple charts. At present, the practitioners have access to a database with a large number of charts from different accident cases. They can choose to combine any number of charts (from the same or different accidents), which are then aggregated and presented in the form of a graphical representation known as an aggregated DREAM chart. This is a tedious process in terms of human resources, as much time is spent composing these charts, which are essential in order to properly analyze traffic accident causation. The specific problem investigated in this study is therefore how SAFER, as one specific example of an organization (within the domain of traffic accident analysis), faces the challenge of analyzing large sets of data while currently following repetitive but solid manual analysis processes. Based on this, our research objective is to: assess the feasibility of an automated computer-aided analysis tool for data analysis of traffic accident data through a prototype implementation.
To approach this problem in a way that also illustrates how automated data mining tool support may be developed, an action design research (ADR) approach (Sein et al., 2011) will be used. The tool strives to automate the currently manual chart aggregation process through a theory-driven prototype, which is considered both a practical and a theoretical contribution. We also make a methodological contribution by being an early adopter of ADR.

The paper continues with a related literature section where different traffic accident analysis methods are described, as well as our theory on how to build a DREAM-based tool. After that, we introduce our research method (ADR), explain why it was chosen and how it was used. In the data and discussion part, the results are presented and discussed. The paper ends with a conclusion and suggestions for further research.

2- RELATED LITERATURE

In this section, we present literature related to our field of study: traffic accident causation data analysis. In section 2.1, different techniques of accident causation data analysis previously developed are presented. In section 2.2, we write about how to build a good DREAM-based tool through user involvement. These sections will, together with reflections related to our ADR research method, be central aspects of our discussion later in the paper, and served as guidance for the development work of our data analysis tool.

2.1- TRAFFIC ACCIDENT ANALYSIS METHODS AND TECHNIQUES

With regard to different analysis methods for accidents, Otte et al. (2009) presented a method known as Accident Causation Analysis with Seven Steps (ACASS). It allows analyzing and collecting causation factors of traffic accidents. According to the authors, ACASS can appropriately define the human errors of the actors involved in a traffic accident. The method contains a model that allows collecting the important information at an accident scene.
An approach to interviewing people involved in an accident is introduced so that the human causation factors are obtained. This is achieved through an analysis system (in seven steps) which takes into account the chronological order from observation (recognizing the danger) to operation (responding to the danger). Additionally, ACASS groups the accident causation factors into three groups: human factors, technical factors from the vehicle, and environment and infrastructure (see Figure 1). Each group contains categories which in turn contain more specific criteria that specify the factor within the category (Otte et al., 2009). ACASS also allows data collected from a scene to be submitted to a database.

![Figure 1. Structural-analytical view of causes of accidents in the human-vehicle-environment model (Otte et al., 2009)](image-url)

Xi et al. (2010) have developed another accident causation analysis method, based on a traffic accident information system (in China). Since the method is based on accident data recorded in a database, the authors focus on the characteristics and classification of traffic accident data. They argue that each accident record includes multiple data attributes and that a data attribute is organized according to five aspects: basic information about the accident, information about the people involved, vehicle information, road information and environment information. Consequently, Xi et al. (2010) identify two layers of data attributes in the traffic accident database, as seen in Figure 2. In their work, they suggest a method that provides quantitative analysis of the contribution of accident analysis data taken from a database. Two formulas are used: the first (1) calculates the importance of four attributes, including people, vehicle, road and environment (Layer 1); the second (2) calculates the importance classification of an attribute (Layer 2).
The result of both formulas (1) and (2) is always between 1 and 4 (1: unimportant, 2: general, 3: important, 4: very important). This results in every classification and attribute being assigned an importance value (1-4). The authors state that the result of the method can only serve as a foundation for formulating improved strategies for traffic safety (Xi et al., 2010). Another accident causation method is DREAM, first developed by Ljung in 2002 (Ljung et al., 2007). Like other analysis methods, it organizes accident data using a classification schema of contributing factors of accidents in a systematic way. DREAM is an adaptation of the Cognitive Reliability and Error Analysis Method (CREAM) (Hollnagel, 1998) with the aim of suiting the road traffic domain. The original goal of DREAM was to identify traffic situations for which the development of technical solutions had the potential to prevent future accidents (Warner et al., 2008). It was thus used to guide the analysis process within different types of technical solutions targeting different areas of accident avoidance. Nowadays, however, the focus of DREAM is mostly to identify interactive systems for risk avoidance (Warner et al., 2008). After accident investigators collect data from the scene of an accident (through interviews and observations), DREAM is initially used to develop an accident model from the data collected at the scene, consisting of the human, the vehicle and traffic environment (technology), and the organization (Ljung, 2002). Once the accident model is created, the practitioner uses DREAM's classification scheme to begin drawing the chart. A DREAM chart is composed of an observable effect, known as a phenotype, and contributing factors to the observable effect (genotypes), according to Ljung (2002). The DREAM manual offers a list of all the possible phenotypes and genotypes and how they are linked. A chart is created for each actor involved in a car accident.
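The chart structure just described, and the aggregation of many charts into an aggregated DREAM chart, can be illustrated with a small sketch. Here a chart is reduced to a set of directed (cause, effect) links running from genotypes toward the phenotype, and aggregation simply counts how often each link occurs across charts so that frequent causation patterns stand out. The factor names below are invented examples, not DREAM's formally defined categories, and the real classification scheme is far richer than this sketch.

```python
# Illustrative sketch of DREAM-style charts and their aggregation.
# A chart is modeled as a list of directed links from a contributing factor
# (genotype) to what it contributes to (another genotype or the observable
# phenotype). Aggregation counts each link's occurrences across charts.
from collections import Counter

def aggregate(charts):
    """Count each (cause, effect) link over a list of charts."""
    counts = Counter()
    for links in charts:
        counts.update(links)
    return counts

chart_a = [("Distraction", "Late observation"),
           ("Late observation", "Timing: too late action")]   # phenotype
chart_b = [("Fatigue", "Late observation"),
           ("Late observation", "Timing: too late action")]

agg = aggregate([chart_a, chart_b])
print(agg[("Late observation", "Timing: too late action")])  # → 2
```

In an aggregated chart, such counts would typically be rendered as edge weights, making the most common paths toward the phenotype visually prominent.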
The goal of DREAM is thus "...to make it possible to systematically classify and store accident causation information which has been gathered through in-depth investigations by providing a structured way of sorting the causes behind the accident into a set of formally defined categories of contributing factors" (Warner et al., 2008, p.7). The latest version of DREAM at the time of writing is version 3.0, with a newer version planned for release during the first or second quarter of 2012. With regard to the three methods presented in this section, Table 1 shows the characteristics of each:

**Table 1. Characteristics of each method reviewed in this paper**

<table> <thead> <tr> <th>Methods</th> <th>Characteristics</th> </tr> </thead> <tbody> <tr> <td>Accident Causation Analysis with Seven Steps (Otte et al., 2009)</td> <td>● Proposes an approach to interview people with the goal of extracting human causation errors ● Data collected can be entered into a database ● Divides accident causation factors into three different categories</td> </tr> <tr> <td>Accident Causation Analysis Method Based on Traffic Accident Information System (Xi et al., 2010)</td> <td>● Based on a traffic accident information system (database) ● Focuses on quantitative (statistical) analysis</td> </tr> <tr> <td>Driving Reliability and Error Analysis Method (Ljung, 2002)</td> <td>● Focuses on identifying interactive systems for risk avoidance ● Visualizes accident schemas in the form of charts ● Aggregates multiple charts to discover patterns that cause certain types of accidents ● An organizer of explanations, not a provider</td> </tr> </tbody> </table>

Given the alternatives presented here, we decided to focus on DREAM. According to unpublished internal reports (Björklund et al., 2007), SAFER conducted comparison studies to determine which method should be used in two of its projects.
The goal of the first project was to investigate which pre-crash method would be suitable for the Investigation Network and Traffic Accident Techniques (INTACT) project at Chalmers. This led to the exclusion of some methods from the start. Each method was first evaluated by one of the group members involved in the study and then discussed within the group. The discussion was based on a set of guiding principles identified at the beginning. At the end of the project, the team presented their recommendations. It was suggested that DREAM should be used along with another accident analysis method called Sequentially Timed Events Plotting (STEP), as the two have great potential to complement each other and offer a clear description of the events and factors leading to an accident. Moreover, DREAM was found to be compatible with the guiding principles:

- It offers case-level and aggregated analysis
- It has a theoretically described accident model and a clearly described analysis method
- No guilt: the goal is not to determine who committed traffic violations
- Several concurrent levels of analysis
- Predefined accident factors
- Countermeasures: the goal of DREAM is to develop countermeasures in order to prevent accidents
- It can be implemented in a database
- Interviews with witnesses and drivers are an important part of the data collection procedure

In the second project, SAFER also conducted a study (SAFER, 2011), along with other partner organizations, to determine which of the following methods were suitable for their "Road Safety Data, Collection, Transfer and Analysis" (DaCoTA) project: DREAM, ACASS and HFF. Over six months, the methods were compared by first setting up a coding exercise in which each participant in the study coded five example cases once with each method. Next, each coder filled in a questionnaire to evaluate their experience with each method. After the coding exercise and questionnaire, all SAFER partners were asked to identify their favorite coding system.
The results of the coding exercise showed that DREAM had higher conformity (65%). The questionnaire showed that DREAM had the highest conformity as well as the most explanatory manual. When stating their preferred method, most partners preferred DREAM, while others wanted to see some elements from ACASS and HFF included in DREAM. The internal report not only concludes that the results are in favor of DREAM, but also that the method is supported by the European Commission. However, the report also noted that some changes to DREAM should be made.

2.2- USER INVOLVEMENT AS A QUALITY ASSURANCE AND DEVELOPMENT STRATEGY

As mentioned above, the prototype to be developed is based on the DREAM method. With this in mind, it is necessary to ensure that the development approach conforms to user needs. Prior to our research, the collaborating organization lacked a clear understanding of its exact needs and of the potential of this implementation. Therefore, relying heavily on user involvement to guarantee the appropriateness of the implementation seems reasonable. User involvement is a popular issue that is currently discussed in the software industry. The reasons for this include negative feedback from customers about products during and after development, dissatisfaction with the software, cost issues and the instability of marketing (Majid et al., 2010). Majid et al. (2010) place strong emphasis on this, as they argue that unsuccessful software products are always based on unacceptable and faulty design. In our case, the communication with practitioners at SAFER is frequent enough to allow user involvement in a way that is likely to have a positive effect on the design and implementation of the prototype, in particular for capturing requirements and gathering feedback in the iterative development phases. Das (2007) points out that a measure of a successful software product is the degree to which the design fulfills the customer's requirements.
He therefore suggests that user involvement be adopted in the software requirements engineering area. It can be used to help developers identify stakeholders and their needs, and to document the specifications. Relying on user involvement thus has a positive effect on the success of software development and on user satisfaction (Das, 2007). However, several questions remain, including how and why user involvement works in practice. Majid et al. (2010) conducted a survey on user involvement in software development life cycles. They investigated to what extent users should be involved in the development cycle. Their initial literature studies state that, because user interaction includes information and technology exchange, each phase of software development must pay attention to user involvement to ensure quality (Majid et al., 2010). The results of the survey showed that the requirements analysis stage has the highest percentage of user involvement, 77.42%, followed by the testing and deployment stage with 64.52%. The involvement percentages of the project selection and planning stage and the system design stage were lower, at 54.84% and 35.48% respectively. The development stage came last with only 16.13% in total. The results also showed that the involvement of users focused more on functional requirements than on non-functional requirements. Thus, they drew the conclusion that the degree of user involvement varies at each stage of the development life cycle and that software engineers should focus on real users' needs throughout the software lifecycle (Majid et al., 2010). Heiskari and Lehtola (2009) present a case study at a company producing software solutions, investigating the state of user involvement in practice. They point out that there are several risks and challenges when involving users in the development process.
For example, in agile methods users are encouraged to participate with developers, but the main focus of agile methods is to deliver a product rather than to be user-centered (Heiskari and Lehtola, 2009). Thus, the goal of the case study is to provide an effective and efficient way to adopt user involvement by understanding how different departments, which have different functions in the organization, involve users in practice. Semi-structured interviews were conducted with various people in different departments, and recordings of the interviews were transcribed into textual descriptions. The authors present several challenges found in practice, such as little information about the user, integrating user knowledge into existing processes, understanding the big picture before going into details, and very little interaction with the end users. They argue that the current state in companies is that users are involved in different departments in several ways, but it is difficult to make sure whether users influence the actual development process or the product (Heiskari and Lehtola, 2009). They conclude that the main principle of user involvement is to gain a thorough understanding of user needs and fulfill those requirements in an effective and efficient way during development, not necessarily to have users participate with developers (Heiskari and Lehtola, 2009). With this theory of user involvement in mind, this study will be conducted using a method called action design research. The outcome of adopting user involvement will be reflected upon in the discussion.

3- ACTION DESIGN RESEARCH (ADR)

3.1- WHAT IS ACTION DESIGN RESEARCH?

Action design research (ADR) (Sein et al., 2011) is relatively new and has its roots in both design research (Hevner et al., 2004) and action research (Susman and Evered, 1978).
When defining action design research, it is important to consider both design research and action research. Design research involves developing an ensemble of IT artifacts to solve a practical problem; the design of the artifact is in this case the focus of the research process, and the organizational intervention is considered secondary. In action research, by contrast, the researcher is often part of the team in the organization where the research project is taking place, as opposed to having a more observational role (Sein et al., 2011). Action design research tries to bring out the best of these two methods and bridge the gap between research and practice. According to Sein et al. (2011), current design research methods pay little to no attention to the shaping of the artifact by the organizational context. Also, current design research methods assign the evaluation to a separate phase after the building of the artifact. Sein et al. (2011, p. 37) write that "...they value technological rigor at the cost of organizational relevance, and fail to recognize that the artifact emerges from interaction with the organizational context even when its initial design is guided by the researchers' intent.". Although there have been earlier attempts to bring organizational intervention into design research methods (Iivari, 2007), they still separate the different stages (intervention, building and evaluation). To this end, action design research is a method that seeks to generate design knowledge by building an innovative IT artifact with the organizational context from which it emerges constantly in mind. Table 2 summarizes the different characteristics of action research, design research and action design research: <table> <thead> <tr> <th>Table 2.
Difference between AR, DR and ADR</th> </tr> </thead> <tbody> <tr> <td><strong>Action research</strong></td> </tr> <tr> <td>● Researcher is tasked to solve an immediate problem in an organization through intervention</td> </tr> <tr> <td>● Involves theory generation (Sein et al., 2011)</td> </tr> <tr> <td>● Tries to link theory with practice</td> </tr> <tr> <td><strong>Design research</strong></td> </tr> <tr> <td>● Seeks to develop an IT artifact to address a class of problems</td> </tr> <tr> <td>● Development is followed by evaluation: “build and then evaluate”</td> </tr> <tr> <td>● Organizational intervention is secondary</td> </tr> <tr> <td><strong>Action design research</strong></td> </tr> <tr> <td>● Recognizes the organizational setting from which the need for an IT artifact is born</td> </tr> <tr> <td>● The stages of building, intervention and evaluation are inseparable</td> </tr> <tr> <td>● Aims at building innovative artifacts in an organizational context and learning from the intervention while solving a problem (Sein et al., 2011)</td> </tr> </tbody> </table>

3.2- RESEARCH SETTING

The study was conducted in close collaboration with SAFER, a joint research unit between the Swedish automotive industry, academia, and authorities in which these partners cooperate within the field of vehicle and traffic safety. We had daily access to the practitioners involved with accident causation data analysis and could therefore interact with them when needed. The practitioners were directly dealing with the problem to be solved: automating the process of aggregating DREAM charts through the development of a prototype. Meetings to discuss the functional and non-functional requirements were held, and potential users were also involved through demos of the prototype in order to get feedback and suggestions. Section 3.4 explains in more detail the development and the interaction with SAFER with regard to action design research.
Another important aspect to mention here is the difference between researchers, practitioners and investigators. While the people we collaborated with are researchers at SAFER, conducting investigations at the scene of an accident is part of their research. Therefore, investigators and practitioners refer to the same group of people (researchers at SAFER). In this paper, they are called practitioners, and the term researchers refers to the authors of this study.

3.3- Motivation for using ADR

Action design research was selected as the research method given that two things were explicit from the start of the study. First, SAFER was looking for a prototype implementation of a tool designed to assist in traffic accident analysis. Second, SAFER wanted to be involved in the decision making of the development at all stages of the process. These two main reasons were later further supported by the fact that an iterative development process was adopted; ADR is itself based on an iterative approach, which makes it a good fit for the development process. Consequently, as these three attributes of ADR match what SAFER wanted from the study, action design research was identified as a highly suitable candidate. Action research (AR) (Susman and Evered, 1978) was considered because of the collaborative element that is central there as well. However, Olsson (2011) argues that action research is iterative more in terms of whole cycles of research than within each cycle, and is also not as design-artifact centric; therefore ADR represented the more suitable choice. While ADR is a newly formed research method, the fact that it is informed by both the highly established but strictly design-oriented design research (DR) (Hevner et al., 2004) and AR's collaborative elements meant that relying on ADR also allowed this research to contribute as an early adopter of the novel research method.
The goal was then to develop a novel prototype shaped not only by our design principles but also by the organization from which it emerges (SAFER). Therefore, it was natural to adopt ADR as the research method because of its emphasis on organizational context, which is considered a key characteristic. Another reason for choosing ADR is the iterative evaluation of the artifact: in design research, the evaluation phase is done after the development, whereas ADR emphasizes that evaluating the IT artifact and the intervention in the organization should be done continuously, as Sein et al. (2011) argue. The decision to involve the user in an iterative manner during each sprint (see Section 2.2) meant that a research method stressing the interwoven activities of building, intervention and evaluation worked well with the quality assurance strategy adopted.

3.4- How we used ADR

This study started with the problem formulation phase, where the research problem was perceived. In our case, it was SAFER who perceived the need for the prototype. According to Sein et al. (2011), identifying and conceptualizing the research problem has to be done first. An initial meeting was held with the practitioners at SAFER where they explained the aggregation process used. We diagnosed a resource-heavy and time-consuming process as the problem with the current chart aggregation approach. According to SAFER, aggregating charts built using DREAM requires dedicating a lot of time and manual labor across different tools, such as Microsoft Excel, together with additional manual analytical work. The current approach also lacked many features that SAFER wanted, such as chart manipulation and visualization options. We concluded that fixing the current process of chart aggregation was not feasible without the introduction of a computer-aided analysis tool.
Based on this, the goal was to build a prototype tool that would not only assist with the chart aggregation process but also respond to the requests the practitioners had, through the implementation of various features that were identified. Based on the first phase, the building, intervention and evaluation (BIE) stage was started by envisioning an automated computer-aided analysis tool that would help SAFER with chart aggregation by increasing speed and saving time through a reduction in the human resources needed. According to Sein et al. (2011), this stage is where the building of the artifact, the intervention in the organization and the evaluation take place concurrently. As mentioned in Section 2, the approach SAFER uses depends on the DREAM method (Ljung, 2002) from the start, when data is collected at the scene of an accident, to the end, when the data is classified into charts. Sein et al. (2011) identified a principle in ADR called the theory-ingrained artifact, which means that the artifact to be developed should be informed by theories. Based on this, the DREAM method itself was used as a theoretical driver for the development of the prototype. According to Sein et al. (2011), two types of theories, as defined by Gregor (2006), are best suited for action design research:

- Theory for explaining and predicting, which implies understanding causes and making predictions while describing the theoretical constructs and the relationships between them.
- Theory for design and action, which is concerned with how to build something. This type focuses primarily on the theoretical knowledge that is used in the development of software systems.

The DREAM method is used as a theoretical driver in our case since it explains how to proceed with the development of the prototype itself (in theory) in terms of implementation. This is in fact consistent with Gregor's (2006) definition of theory for design and action.
Therefore, using DREAM principles as theoretical knowledge in the development makes our theory one of design and action. Indeed, the prototype implementation follows the same components of DREAM, such as charts and aggregated charts, as well as DREAM concepts like phenotypes and genotypes. Basing the prototype on the DREAM method provides a tool that works in a way that is familiar to the practitioners, since they were already working with DREAM manually. In addition, DREAM is already widely used by SAFER and its partners in Europe. However, no evidence of a computer tool that implements DREAM was found; by making this tool available, any researcher who is familiar with DREAM can benefit from it in their work. Once the decision had been made on which theoretical lens to rely on during the prototype development, we continued into the iterative BIE stage. Documents relevant to how the DREAM method works were collected; understanding how SAFER used DREAM to organize the data was vital in order to base the prototype on it. At this point, the focus shifted to the iterative process of the BIE stage. Sein et al. (2011) argue that this phase determines the source of innovation, which can result from the artifact design or the organizational intervention. The authors identify an IT-dominant BIE and an organization-dominant BIE; the IT-dominant BIE is recommended if the goal is to create an innovative design. An IT-dominant BIE was picked because our intervention in the organization is IT-centric. Furthermore, our intervention is low level (accident causation analysis) as opposed to the organization-wide one favored by an organization-dominant BIE. Additionally, Sein et al. (2011) note that in an IT-dominant BIE the practitioners should first influence the design, which is what we did at this stage as described earlier (continuous feedback). Second, the early versions of the design serve as lightweight interventions in a limited context (Sein et al., 2011).
Indeed, our early iterations of the design were limited, in terms of organizational intervention, to the practitioners who were directly involved in accident causation analysis in the organization. Only at a later stage will the more mature versions of the prototype be introduced to a wider set of practitioners for further refinement through use in the organizational setting and context. Figure 3 below shows the generic schema of an IT-dominant BIE. With the IT-dominant BIE acting as the design continuum at this stage, an initial design of the prototype was developed and then revised and shaped by SAFER before the implementation started. The process of shaping and developing the prototype was then performed iteratively throughout the design cycles, involving not only us but also the practitioners. The practitioners, who can also be seen as the final users, were continuously involved in each iteration, where they provided feedback on the features that had just been implemented as well as guidance on what needed to be changed and in what way. Live demos were conducted regularly to show how the prototype worked. We also followed the principle of concurrent evaluation, as opposed to evaluation being a separate stage, which is another important principle of ADR. The head practitioner was heavily involved when the artifact was in the alpha stage of development. Subsequently, the prototype evolved (through organizational intervention) into a more mature artifact (beta version), which allowed it to be deployed to a wider organizational context. The objective of this wider evaluation is the continuous refinement of the tool (Sein et al., 2011). The third stage of ADR is reflection and learning; the objective of this stage is to reflect on the design during the project and evaluate the adherence to principles (Sein et al., 2011). Section 4 reflects on the learning outcomes of this study and discusses the implications.
The last stage of ADR is the formalization of learning; at this point the goal is to move from the specific to the generic (Sein et al., 2011) and to provide a set of design principles for a class of field problems. This study is the first to use ADR within the field of accident causation analysis. Therefore, further studies are needed in order to develop more concrete results that can provide mature design principles in this area.

3.5. Data Collection.

The different phases of our study included multiple sources of data, such as meetings, related literature, qualitative interviews and live demos. The data collection mostly covered topics such as the theory behind the DREAM method, the functional requirements of the tool and the needs of SAFER. Table 3 below summarizes our data collection procedure during every phase of ADR (Henfridsson and Olsson, 2007): <table> <thead> <tr> <th>Stage 1: Problem formulation</th> </tr> </thead> <tbody> <tr> <td>The problem formulation started with an initial meeting where SAFER explained the practical problem. We collected documents and research papers about DREAM.</td> </tr> <tr> <td>Data sources:</td> </tr> <tr> <td>- Meetings</td> </tr> <tr> <td>- SAFER documents (research papers relevant to the problem)</td> </tr> <tr> <td>- Literature related to accident causation analysis</td> </tr> </tbody> </table>

Stage 2: Building, Intervention, and Evaluation
This stage was done in the form of sprints. We held continuous meetings with a senior practitioner to refine the design. The prototype was also demonstrated numerous times to gather feedback, and interviews were conducted to gather requirements-related data.
Data sources:
- Design meetings
- Demos
- Interviews

Stage 3: Reflection and Learning
The goal of stage three is to analyze the intervention results and evaluate adherence to principles (Sein et al., 2011). The prototype was tested with the head practitioner at SAFER to make sure it follows DREAM’s theoretical principles.
This was mostly done through live demos where the practitioners used the prototype. A lunch seminar at SAFER was also arranged where we presented the tool to a wider set of users.
Data sources:
- Demos
- Lunch seminar

Stage 4: Formalization of Learning
This phase is characterized by abstracting the learning outcomes into a class of problems. In our case, this abstraction is still preliminary; hence, more research is needed in order to establish abstract design principles.

4- Data and Discussion

In this section, we discuss the implications of our study at SAFER, and the collected data is used to illustrate what is discussed here. The data is presented in the form of episodes where actual data or events are described. In Section 4.1, the practical implications of our study are discussed, mainly the tool itself as well as the practical organizational interventions. Next, in Section 4.2, the implications of using user involvement and DREAM are presented, together with design principles related to the last stage of ADR. In Section 4.3, the use of ADR is discussed and reflected upon.

4.1- Practical Implication

4.1.1- The Artifact

When creating the artifact, much emphasis was put on the graphical aspect of representing the data in such a way that it would give as much of an overall layout as possible. The graphical representation (also known as information visualization) had to be fitted to the kind of data and to the information the user is looking to extract from that data, whether by searching or by coincidence; i.e., the representation has to be done in such a way that if users know what they are looking for, they should be able to spot it. The user should also be able, by just browsing the representation, to extract information that he or she may not have been looking for but which exists nonetheless. In this case, the data, in correlation with the analysis method (DREAM), is depicted using a chain-of-events method (Sandin, 2008).
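A chain-of-events chart of this kind can be sketched as a small data structure: nodes standing for DREAM genotypes (contributing factors) and phenotypes (observable effects), connected by directed cause-effect links. This is a hypothetical illustration of the structure only; the class and field names below are our own assumptions, not part of DREAM or the prototype.

```python
from dataclasses import dataclass, field

# Illustrative sketch: Node, Chart and add_link are our own names,
# not DREAM's or the prototype's actual encoding.

@dataclass(frozen=True)
class Node:
    code: str   # classification code, e.g. "B1" (invented here)
    kind: str   # "genotype" (contributing factor) or "phenotype" (observable effect)

@dataclass
class Chart:
    """One accident case as a chain of directed cause-effect links."""
    case_id: str
    nodes: set = field(default_factory=set)
    links: set = field(default_factory=set)  # directed (cause, effect) pairs

    def add_link(self, cause: Node, effect: Node) -> None:
        self.nodes.update({cause, effect})
        self.links.add((cause, effect))

# A genotype (cause) linked to the phenotype (observed effect) it contributed to.
chart = Chart("case-001")
chart.add_link(Node("B1", "genotype"), Node("A1", "phenotype"))
```

Because each chart is just a set of nodes and directed links, it can be laid out either left-to-right or top-to-bottom without changing the underlying data, which matches the reading-direction variations discussed below.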
Sandin (2008) presented multiple types of information visualization for chain-of-events methods, with figure 4 displaying some of them. During the early stages of the research it was evident that the choice of how to visualize information varied between users of the DREAM method. Even so, since the DREAM method has such a consistent and general way of grouping causes and effects leading to consequences, in accident causation terms the data was always most readable when drawn as either a single-event or a multi-linear event sequence. With this in mind, the tool was designed following the structure of the aggregated charts, which in turn follow the multi-linear event sequence method of presenting the diagram. This is because most practitioners at SAFER read the flow of the sequence from left to right, with the most common variation being reading the sequence backwards from top to bottom. The tool was therefore designed to follow the multi-linear event sequence while giving the possibility to change the flow of the sequence. As seen in figure 5, the user is given multiple choices as to not only how the diagram is rendered but also its layout: margins, size of text, direction of arrows, the possibility to view without arrows, etc. This gives each user the possibility to view the charts in a manner that makes them as readable as possible for each individual. Prior to data aggregation, each chart is presented in a drop-down list. Once clicked, a chart is displayed in a tab page shown in the main display area. By displaying charts in tab pages it is easy to cycle between chosen charts and quickly get an overview of the differences, rather than displaying each chart one at a time. This design decision also applies to the aggregated charts: once these are created, they are added to the drop-down list. The actual aggregation is done by selecting the charts to be part of the aggregation and giving the aggregated chart a name.
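The aggregation step just described, together with the occurrence counting and filtering the tool offers, can be sketched as follows. Representing each chart simply as a set of (cause, effect) link pairs is our own assumption for illustration, not the prototype's actual data format, and the link names are invented.

```python
from collections import Counter

def aggregate(charts):
    """Merge many charts, each a set of directed (cause, effect) links,
    into one aggregated chart counting how often each connection occurs."""
    counts = Counter()
    for links in charts:
        counts.update(links)
    return counts

def filter_rare(aggregated, min_occurrences):
    """Keep only connections occurring at least `min_occurrences` times,
    mirroring the tool's filtering option."""
    return {link: n for link, n in aggregated.items() if n >= min_occurrences}

# Three small illustrative charts sharing one common link.
charts = [
    {("fatigue", "late reaction"), ("late reaction", "collision")},
    {("distraction", "late reaction"), ("late reaction", "collision")},
    {("late reaction", "collision")},
]
agg = aggregate(charts)                          # link -> occurrence count
common = filter_rare(agg, min_occurrences=2)     # only the recurring pattern remains
```

Filtering away rare connections in this manner lets an analyst focus on the causal patterns that recur across many accident cases.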
This chart is then added to the drop-down list and displayed on the screen (see figure 6). It is then possible to view the number of occurrences for the connections in the chart, as well as to filter away connections between nodes that occur fewer than a certain number of times. With the help of the tool developed, the user can simply import DREAM data and aggregate thousands of charts. The aggregation is done automatically, which saves a lot of the time previously dedicated to this task (weeks). Before, several people who knew DREAM had to spend a lot of time aggregating charts; the chances of errors were high and many tools were involved. DREAM-AT should make this easier. We tested the method of aggregating the charts manually, and in our case aggregating 3 charts took as long as half a day, while the tool does the same aggregation in seconds.

4.1.2 - ADR EFFECTS ON THE ORGANIZATION

First, it is important to reflect on our research method and how it helped us solve a practical problem with regard to the goals of ADR. Sein et al. (2011) argue that action design research aims to address a problematic situation while building an innovative artifact in an organization and learning from the intervention. In our case, we built a prototype that automates chart aggregation during the organizational intervention. Furthermore, both the developers (us) and SAFER benefited from this collaboration. In fact, during our intervention, we brought change to some SAFER practices in a way that improved their work, as illustrated in the examples below:

Episode 1: The first organizational intervention concerned the comma-separated value (CSV) files exported from a database. The researchers suggested a function that simply exports all DREAM-related tables instead of exporting each file separately. SAFER participants took the suggestion into consideration and delivered the idea to the developers of the database.
In the end, the new feature was added, and the current database system (DaCoTa) allows exporting all the DREAM-related tables instead of selecting them manually.

Episode 2: The CSV files use the comma (,) as a separator for the fields. Some practitioners use commas in the text they enter into the database. This resulted in corrupted CSV files when exporting, because a CSV parser also interprets commas that are part of a field (text) as separators. We suggested that the separator of the CSV files used by the system be changed to a tab character instead of a comma. SAFER participants agreed and were willing to make the update. Doing so made the prototype error-free when importing CSV files, and practitioners who enter data into the database can also use commas freely without corrupting the export process.

Episode 3: Another example of organizational intervention relates to one of the features implemented towards the end of our study. SAFER participants required a function to filter all the DREAM charts after they had been loaded into the prototype. Initially, the goal was to use an extra CSV file containing additional data about accidents. The extra file was to be imported from a new accident case management system that SAFER is developing. However, since the new system was still being developed, we needed to agree on how the prototype would work with it. After meetings and demos, it was decided that the prototype would rely on an extra filtering file (CSV) with two fields used to identify charts previously loaded and filtered.

The above examples show how researchers can work together with an organization to improve its work practices and learn from each other, which is one of the goals of ADR. It is worth mentioning that the second episode also benefits from ADR's principle of frequent communication and knowledge sharing.
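The delimiter issue in Episode 2 can be illustrated with a short sketch (the field values below are invented for illustration): with a comma separator, commas inside a free-text field split one record into extra fields, while a tab separator leaves the text intact.

```python
import csv
import io

# The intended record: a case id plus one free-text field containing commas.
row = ["case-042", "driver braked late, road was wet, poor visibility"]

# Comma-separated export: the commas inside the text are read as separators.
comma_export = io.StringIO(
    "case-042,driver braked late, road was wet, poor visibility\n"
)
bad = next(csv.reader(comma_export))  # yields 4 fields instead of 2

# Tab-separated export: the free text survives as a single field.
tab_export = io.StringIO(
    "case-042\tdriver braked late, road was wet, poor visibility\n"
)
good = next(csv.reader(tab_export, delimiter="\t"))  # the 2 intended fields
```

Because tabs rarely occur in typed free text, switching the export delimiter resolved the corruption without requiring practitioners to change how they enter data.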
4.2 THEORETICAL IMPLICATION

4.2.1 EFFECTS OF USER INVOLVEMENT

In this section, the effects of user involvement are first presented in the form of examples (episodes) to show the results of user involvement in the organization. Next, we discuss the theoretical implications of user involvement in this study. As mentioned in Section 2.2, our collaborating organization initially lacked an understanding of its needs with regard to the prototype. We therefore opted for a development approach that favored user needs. This was a good decision, because on multiple occasions we could extract more detailed user requirements. Indeed, some features were not clear enough until we showed one of the users (practitioners at SAFER) how a feature had been developed and how we perceived it from our perspective.

Episode 1: During a meeting, the goal was to determine how the tool would communicate with other systems already in place at SAFER. The practitioners proposed an initial design. When the implementation of this design started, we continuously involved the practitioners, to the point where everyone realized the initial design was not feasible due to time constraints and the risk of duplicate requirements. The design was eventually revised into a feasible requirement. Figure 7 shows the initial design on a whiteboard.

Episode 2: Another example of the tight collaboration based on user involvement between researchers and practitioners is the negotiation of function implementation details. SAFER practitioners wanted the possibility to filter out DREAM charts already loaded into the prototype using an extra imported file that would enable interfacing with other systems. This feature was unclear in the beginning and would have taken a long time to implement, since we would have had to study how the other systems worked.
But after discussing with one of the users the time the researchers had left and how the prototype could be changed, we eventually managed to adjust the feature in a way that not only responded to user needs but was also feasible within the remaining timeframe. The outcome of such close collaboration led to some realizations on our end as to how design decisions should be carried out between the researchers and the practitioners. Not only did the influence from the practitioners alter the way we conducted our work, but it also changed how we perceived the development process. The core concern of the artifact development was to relate to and understand the context and area of the problem, i.e., identifying design aspects for the prototype based upon an understanding of the method used by the practitioners (DREAM). Doing so becomes a process during ADR, since the understanding of the goals for the artifact gradually evolves as the understanding of the design improves, as seen in figure 8 (Gasson, 1997). In our case the process is best described as creating a mutual pool of knowledge between the practitioners and the researchers in order to detect implications of the emergent design from both sides. An example of such understanding can be found in the first episode, where the presently used method was explained. Once the practitioners had done that, we explained to them how we had perceived their explanation and how we expected to work in accordance with it. The first stages of the design implementation were therefore about bringing the knowledge of the practitioners and the researchers as close together as possible in order to detect the most evident design decisions, rather than the most important ones, because up front it is very hard to evaluate which design decisions are important. Assessing the importance of particular design decisions is better left for later.
The way of creating a shared pool of knowledge can be explained via figure 9, where the left circle represents the knowledge possessed by the researchers in regard to software engineering, computer science, etc., whereas the right circle represents the knowledge possessed by the practitioners on the DREAM method and the analysis process around that method. The closer these two circles can be drawn together, the more evident the emergent design can be; i.e., the more the two parties share their understanding and interpretation of the knowledge, the more a shared set of knowledge can emerge and make design decisions evident. During development, researchers mostly base new design decisions on problems or ideas drawn from their expertise in the area of software engineering, such as user interaction, graphical interfaces, etc. In retrospect, this could be because of the importance of actually getting a design idea physically implemented in order to properly evaluate it. At first glance, this process seems to place the weight on the side of the design researchers, since the emergent design can be produced without constant input and discussion between the two parties. This in turn could explain to some extent the poor user involvement during development phases (Majid et al., 2010). We found during development that while it is important for us to drive the development forward, it is also important to share as much of that drive as possible with the practitioners and to find ways to get the users involved as much as possible, so that they feel they own the project as much as the researchers do, rather than just checking in now and again to see how it is going.
However, the design decisions that had the most impact and the greatest usefulness for the tool were not the radical ones made up front, but those made once a practitioner actually saw what had been produced and therefore understood design possibilities which had previously been unknown to them. Triggered by the realization of the importance of developing a shared understanding, we started demonstrating all features immediately after they reached a demonstrable state; one feature was even demonstrated 15 minutes after it had been implemented in a demonstrable state. During the demonstrations we got more than just feedback from the practitioners: we actually had conversations about the future of the tool, and these emerging discussions guided the direction of the design. This exemplifies how the notions of learning-by-doing and reflection-in-action help capture the understanding of actual needs during design (Gasson, 1997; Olsson, 2011). In addition, when we started doing this the practitioners also became more engaged in the non-functional requirements, such as fitting the way they as individuals would use the tool in accordance with DREAM to analyze the data. It also became apparent that the practitioner who had been part of the design discussions from the start had the most relevant design suggestions in terms of what was feasible to develop in the time period, difficulty of implementation, etc. This meant that the practitioner in question knew enough about the researchers' context to propose design decisions that would benefit not only the artifact but also the means to develop it. Having experienced this, we would like to emphasize the importance of getting people involved early, since it gets harder for people who join later to get a sense of what the emergent design decisions are and can be.
Prior to the development of the artifact, emergent design was not a recognized notion in the development process for either the researchers or the practitioners. However, during the BIE stage the importance of this notion became apparent, for example when trying to create a dialogue within the ADR about the goals of the practitioners and the researchers. As pointed out by Olsson (2011), gradual refinement of goals is no new phenomenon within systems development, but during our research we detected just how much of an impact emergent design had in relation to how ADR emphasizes close cooperation between us as researchers and the practitioners (Sein et al., 2011). This also supports the view that design goals and problems are emergent aspects, recognizing that designers steer toward what makes sense in the specific context of activity (Olsson, 2011). Thus, the importance of emergent design shows that researchers using ADR cannot have overly fixed expectations at the start of the research as to what is going to be implemented from a research perspective; otherwise the work only becomes a consultancy job, and ADR will not be a natural method for testing hypotheses. The design, the interaction, and the way the researchers work together create the possibility of generating knowledge.

[Figure 9. Visualization of the interpretation of knowledge for the emergent design]

4.3. Methodological Implication

This study makes a methodological contribution by being an early adopter of action design research. Therefore, it is important that this paper reflects on the experience of using ADR. First, the method allowed us to intervene in an organization in order to solve a practical problem. The result was a prototype that was continuously refined by the researchers' initial design decisions and the organizational context from which it emerged.
Second, by adopting ADR, SAFER's practices for accident causation analysis and how it is conducted improved. Since action design research is a relatively new method (Sein et al., 2011), adopting it instead of action research or design research meant that no prior papers that used ADR were available as a reference. While it allowed us to solve a real-world practical problem and improve the work practices, more work is needed before mature design principles can be developed for the potential class of problems within the field of accident causation analysis. Sein et al. (2011) mention that once a beta version of the artifact is available, it should be refined further in a wider organizational context (stage three); this study reached stage three but did not complete all of it. As a result, more studies and research using action design research ought to be conducted if ADR is to be established as a credible research method.

5- Conclusion

This study set out to assess the feasibility of an automated computer-aided analysis tool for traffic accident causation data through a prototype implementation. Through action design research, we have not only developed a theory-ingrained IT artifact to assist in traffic accident analysis but also presented a set of design principles that are likely to be important to researchers within data mining and, more specifically, accident causation data analysis. In terms of future research, we particularly suggest more studies that adopt action design research as a method to fully explore its positive and negative aspects, specifically a study that succeeds in developing mature design principles. It would also be interesting to see a study that assesses the work practices of SAFER after the introduction of the tool. To this end, this research provides a first glimpse of what an ADR study can be, and we encourage other researchers to conduct more ADR-related studies in full organizational settings.

REFERENCES

Das, V.V. 2007.
Involvement of users in Software Requirement Engineering. MES College of Engineering, Kuttippuram, Kerala, India. Published in IEEE Heiskari, J., Lehtola, L. 2009. Investigating the State of User Involvement in Practice. Software Business and Engineering Institute, Helsinki University of Technology. Published in IEEE
ROSCOE USER GUIDE

Version 1.0

Ron Tischler
Marvin Solomon
Raphael Finkel

Computer Sciences Technical Report #336
September 1978

Abstract

Roscoe is a multi-computer operating system running on a network of LSI-11 computers at the University of Wisconsin. This document describes Roscoe from the viewpoint of a user or a writer of user-level programs. All system service calls and library routines are described in detail. In addition, the command-line interpreter and terminal input conventions are discussed. Companion reports describe the purposes and concepts underlying the Roscoe project and give detailed accounts of the Roscoe kernel and utility processes.

TABLE OF CONTENTS

1. INTRODUCTION
   1.1 Purpose of this Document
   1.2 Caveat
   1.3 Format of this Guide
2. ROSCOE CONCEPTS AND FACILITIES
   2.1 Links and Messages
   2.2 Processes
   2.3 Timing
   2.4 Interrupt Level Programming
   2.5 Input/Output
   2.6 Miscellaneous Routines
   2.7 Preparing User Programs
3. ROSCOE PROGRAMMER'S MANUAL
   3.1 Awaken (Service Call)
   3.2 Call (Library Routine)
   3.3 Close (Library Routine)
   3.4 Copy (Library Routine)
   3.5 Create (Library Routine)
   3.6 Date (Service Call)
   3.7 Datetol (Library Routine)
   3.8 Destroy (Service Call)
   3.9 Die (Service Call)
   3.10 Fork (Library Routine)
   3.11 Fsline (Library Routine)
   3.12 Handler (Service Call)
   3.13 Inline (Library Routine)
   3.14 Kill (Service Call)
   3.15 Killoff (Library Routine)
   3.16 Link (Service Call)
   3.17 Load (Service Call)
   3.18 Ltodate (Library Routine)
   3.19 Nice (Service Call)
   3.20 Open (Library Routine)
   3.21 Outline (Library Routine)
   3.22 Parline (Library Routine)
   3.23 Print (Library Routine)
   3.24 Read (Library Routine)
   3.25 Readline (Library Routine)
   3.26 Recall (Library Routine)
   3.27 Receive (Service Call)
   3.28 Remove (Service Call)
   3.29 Seek (Library Routine)
   3.30 Send (Service Call)
   3.31 Setdate (Service Call)
   3.32 Startup (Service Call)
   3.33 Stat (Library Routine)
   3.34 Time (Service Call)
   3.35 Unlink (Library Routine)
   3.36 Write (Library Routine)
4. CONSOLE COMMANDS
   4.1 alias <filename1> <filename2>
   4.2 background <filename> <arg>
   4.3 copy <filename1> <filename2>
   4.4 delete <filename>
   4.5 directory <filename>
   4.6 help
   4.7 kill <arg>
   4.8 make <filename>
   4.9 run <filename> <arg1> <arg2>
   4.10 set <modelist> or SET <modelist>
   4.11 time <format>
   4.12 type <filename>
5. CONSOLE INPUT PROTOCOLS
6. UTILITY PROCESS PROTOCOLS
   6.1 Input/Output Protocols
   6.2 Resource Manager Protocols

1. INTRODUCTION

Roscoe is an experimental operating system for controlling a network of microcomputers. It is currently implemented on a network of five Digital Equipment Corporation LSI-11 computers connected by medium-speed lines.* The essential features of Roscoe are:

1. All processors are identical. However, they may differ in the peripheral units connected to them. Similarly, all processors run the same operating system kernel.

2. No memory is shared between processors.
All communication is done by explicit passing of messages between physically connected processors.

3. No assumptions are made about the topology of interconnection except that the network is connected (that is, there is a path between each pair of processors). The lines are assumed to be sufficiently fast that fairly tight interaction is possible between processes on different machines.

4. The network should appear to the user to be a single machine. A process runs on one machine, but communicating processes have no need to know and no way of finding out whether they are on the same processor.

*This equipment was purchased with funds from National Science Foundation Research Grant #MCS77-08958.

1.1 Purpose of this Document

This document describes Roscoe from the point of view of a user or user-programmer. It is both a tutorial and a reference guide to the facilities provided to the user. All information necessary to the programmer of applications programs should be found here. The concepts and goals of Roscoe are discussed further in [Solomon and Finkel 78]. That document also lists some research problems that the Roscoe project intends to investigate. The operating system kernel that provides the facilities listed below is described in considerable detail in [Finkel and Solomon 78]. Similar detailed documentation about utility processes (such as the File System Process, the Teletype Driver, the Command Interpreter, and the Resource Manager) is contained in [Tischler, Finkel, and Solomon 78]. Roscoe has been developed with extensive use of the UNIX operating system [Ritchie and Thompson 74]. All code (with the exception of a small amount of assembly language) is written in the C programming language [Ritchie 73]. The reader of this document is assumed to be familiar with both UNIX and C. A new programming language is being designed for applications programs under Roscoe; it will be described in a future report.
However, this version of the Roscoe User Guide assumes that all Roscoe software is written in C.

1.2 Caveat

Roscoe is in a state of rapid flux. Therefore, many of the details described in this Guide are likely to change. The reader who intends to write Roscoe programs should check with one of the authors of this report for updates.

1.3 Format of this Guide

Section 2 provides an overview of the concepts and facilities of Roscoe. It is organized according to general subject areas. Specific functions are mentioned but not described in full detail. Section 3 is a programmer's reference manual. Each function is listed alphabetically, its syntax and purpose are described, and it is classified as a Service Call (an invocation of an operating system kernel routine) or a Library Routine (a procedure linked into the user program). Section 4 describes the command line interpreter and lists the commands that may be entered from the terminal. Section 5 describes the conventions governing terminal input/output. Section 6 presents protocols for communicating with the various utility processes.

2. ROSCOE CONCEPTS AND FACILITIES

The fundamental entities in Roscoe are: files, programs, core images, processes, links, and messages. The first four of these are roughly equivalent to similar concepts in other operating systems; the concepts of links and messages are peculiar to Roscoe.

A file is a sequence of characters on disk. Each file has directory information giving the time of last modification and restrictions on reading, writing, and execution. The contents of a file may contain header information that further identifies it as an executable program. Version 1 of Roscoe uses the UNIX file system; therefore, the reader familiar with UNIX should have no problem understanding Roscoe files. Program files contain text (machine instructions), initialized data, and a specification of the size of the uninitialized global data space (bss) required by the program.
Program files also contain relocation information and an optional symbol table.

A process is a locus of activity executing a program. Each process is associated with a local data area called its stack. A program that never modifies its global initialized or bss data but only its local (stack) data is re-entrant, and may be shared by several processes without conflict.

A main-storage area containing the text of a program, its initialized data, and a bss data area, but not including a stack, is called a core image. The initiation of a process entails locating or creating (by loading) a core image, allocating a stack, and initializing the necessary tables to record its state of execution. Similarly, when a process dies, its tables are finalized and its stack space is reclaimed. If no other processes are executing in its core image, then the space occupied by the core image is available for re-use.

Some processes, called utility processes, provide facilities to other processes, such as device or file management. Utility processes may invoke service calls not intended to be used by the casual user, but otherwise they behave exactly like user processes.

A link combines the concepts of a communications path and a "capability." A link represents a logical one-way connection between two processes, and should not be confused with a line, which is a physical connection between two processors. The link concept is central to Roscoe. It is inspired and heavily influenced by the concept of the same name in the Demos operating system for the Cray-1 computer [Baskett 77].

Each link connects two processes: the holder, which may send messages over the link, and the owner, which receives them. The holder may duplicate the link or give it to another process, subject to restrictions associated with the link itself. The owner of a link, on the other hand, never changes. Links are created by their owners. When a link is created, the creator specifies a code and a channel.
The kernel automatically tags each incoming message with the code and channel of the link over which it was sent. Channels are used by a process to partition the links it owns into subsets: when a process wants to receive a message, it specifies a set of channels. Only a message coming over a link corresponding to one of the specified channels is eligible for reception.

A link is named by its holder by a small positive integer called a link number, which is an index into a table of currently-held links maintained by the kernel for the holder. All information about a link is stored in this table. (No information about a link is stored in the tables of the owner.)

A message may be sent by the holder to the owner of a link. In addition, certain messages are manufactured by the kernel to inform the owner of a link of changes in its status. For example, the creator of a link may specify that when the link is destroyed, a DESTROYED notification be sent along it. Such messages are identified to the recipient by an unforgeable field.

A message may contain, in addition to MSLEN (currently 40) characters of text, an enclosed link. The sender of the message specifies the link number of a link it currently holds. The kernel adds an entry to the link table of the destination process and gives its link number to the recipient of the message. In this way, the recipient becomes the holder of the enclosed link. If the original link is not destroyed, the sender and the recipient hold identical copies of the link.

There are two kinds of links: request and reply. A reply link is distinguished by the fact that it can only be used once; it is destroyed when a message is sent over it. A reply link may not be the enclosed link in a message sent over another reply link. Similarly, a request link cannot be sent over a request link.
These restrictions enforce a communication protocol in which one process does most of the talking, over a REQUEST link, and can be answered once for each enclosed REPLY link. The remainder of this section lists service calls and library routines by subject area.

2.1 Links and Messages

A new link is created by a process through the "link" service call. Initially, the creator is both holder and owner of the link. Messages are sent with the "send" service call, which specifies a link over which the message is to be sent, the message text, and an optional enclosed link. Messages are accepted by "receive", which specifies a set of channels, a place to put the message, and a maximum time the recipient is willing to wait. "Receive" can also be used to sleep for a specified period of time by waiting for a message that will never arrive. A simple send-receive protocol is embodied in the library functions "call" and "recall", which are simpler to use than "send" and "receive" and should be adequate for most routine communication.

2.2 Processes

A process may spawn others by communicating with the Resource Manager; typical cases are handled by "fork". When calling "fork", the parent may indicate a link that it wishes to give to the child; the child obtains this link with "parline". In certain cases the parent can kill the child with "killoff"; in other cases a control-C entered at the terminal can have this effect. Every user process is born holding link number 0, whose destination is the Resource Manager on that process's machine. A process can terminate itself by calling "die", and can sleep by using either "nice" or "receive". The service calls "load", "startup", "kill", and "remove" control core images and processes. They are used by the Resource Manager and are not intended for the typical user.

2.3 Timing

Roscoe has two notions of time. One is the wall clock, which keeps track of seconds in real time.
Messages sent between Resource Managers are routinely used to keep the various machines synchronized. There is also an interval timer, which may be used to monitor elapsed time in increments of ten-thousandths of a second. No process may change the interval timer. The wall clock is referenced, changed, enciphered, and deciphered by "date", "setdate", "datetol", and "ltodate", respectively. The interval timer is referenced by "time".

2.4 Interrupt Level Programming

User programs may handle their own interrupts. A process may establish an interrupt-level routine with the "handler" call. The interrupt-level routine should, of course, be thoroughly debugged and fast. Interrupt-level routines may notify the process that established them by calling "awaken"; the process to be notified uses "receive" to obtain this notification. Only the Teletype Driver uses this feature.

2.5 Input/Output

To use files, a process first obtains a link to the File System Process by calling "fsline". This link is used in subsequent "create", "open", "stat", "alias", and "unlink" calls, which behave much like the UNIX calls with similar names. "Open" and "create" calls return links to be used for performing "stat", "close", and all input/output operations on the open file. To use the terminal, a process obtains input and output links by calling "inline" and "outline", respectively. An input link can be used to discover or change terminal modes (only the Command Interpreter uses this feature) and to perform terminal input. An output link can be used for terminal output. These links may also be "closed"; they are closed automatically when a process dies. The Teletype Driver allows at most one input link to be open at a time. Reading is performed by the routines "read" and "readline". Writing is performed by "write" and, if formatting is desired, by "print".
The service call "printf" is identical to "print" except that it does direct terminal output; it is a debugging tool not intended for the typical user. Reads and writes are no more efficient with buffers of size 512, because Roscoe splits up I/O into packets of MSLEN bytes anyway.

2.6 Miscellaneous Routines

The following routines from the C library also exist in the Roscoe library: "atoi", the long arithmetic routines, "reset", "setexit", "strcpy", "streq", "strge", "strgt", "strle", "strlen", "strlt", "strne", and "substr". An additional routine supplied by Roscoe is "copy".

2.7 Preparing User Programs

User programs for Roscoe are written in the C programming language. They are compiled under UNIX on the PDP-11/40 in the directory "/usr/network/roscoe/user" and should include the files "user.h" and "util.h". Source programs should have filenames ending with ".u". To prepare a file named "foo.u", execute "makeuser foo", which creates an executable file for Roscoe named "foo".

3. ROSCOE PROGRAMMER'S MANUAL

The following is an alphabetized list of all the Roscoe service calls and library routines.

3.1 Awaken (Service Call)

    awaken()

Only an interrupt-level routine may use this call. It sends a message to the process that performed the corresponding "handler" call along the channel specified by that "handler" call. Returned values: Success returns a value of 0. -2 is returned if the message cannot be sent because no buffers are available; an "awaken" may succeed later.

3.2 Call (Library Routine)

    int call(ulink,outmess,inmess)
    char *outmess,*inmess;

This routine sends a message to another process and receives a reply. The link over which the message is sent is "ulink", which should be a REQUEST link. The argument "outmess" points to the message body to be sent, of size MSLEN. Similarly, "inmess" points to where the reply body, of size MSLEN, will be put. If "inmess" is 0, any reply will be discarded.
An error is reported if the reply does not arrive in five seconds (see "recall"). In normal cases, the return value is the link enclosed in the return message; it is -1 if there isn't any enclosure. Ignoring errors, the user may consider this routine an abbreviation for:

    struct urmesg urmess;
    struct usmesg usmess;
    usmess.usbody = outmess;
    send(ulink,link(0,CHAN16,REPLY),&usmess,NODUP);
    urmess.urbody = inmess;
    receive(CHAN16,&urmess,5);
    return(urmess.urlnenc);

Returned values: Under normal circumstances, the return value is either -1 or a link number. -2 means an error occurred while sending, -3 means the waiting time expired, -4 means that the return link was destroyed, -5 means that something was received with the wrong code, and -6 means that a return link couldn't be created in the first place.

NOTE: CHAN16 is implicitly used; for this reason, the user is advised to avoid this channel entirely. Several other library routines also invoke "call", and thus use CHAN16.

NOTE: "Call" is not re-entrant, and so programs that use it cannot be SHARED (see "fork").

3.3 Close (Library Routine)

    int close(file)

The argument "file" is either a link to an open file, or a terminal input or output link. The returned value is 0 on success, negative on failure (specifically, "close" is synonymous with "destroy"). These links are automatically closed when a process dies; however, execution of this command gives the process more room in its link table. Also, closing the teletype input makes it possible for another process to open it.

3.4 Copy (Library Routine)

    copy(to,from)
    char *to,*from;

A string of length MSLEN is copied from "from" to "to". If "from" is 0, then MSLEN nulls are copied instead.

3.5 Create (Library Routine)

    int create(fslink,fname,mode)
    char *fname;

If the file named "fname" exists, it is opened for writing and truncated to zero length. If it doesn't exist, it is created and opened for writing.
The argument "fslink" is the process's link to the File System. The protection bits for the new file are specified by "mode"; these bits have the same meaning as for UNIX files, but all files on Roscoe have the same owner. The returned values are as in "open".

3.6 Date (Service Call)

    long date();

This service call returns the value of the wall clock, which is a long integer representing the number of seconds since midnight, Jan 1, 1973, CDT.

3.7 Datetol (Library Routine)

    long datetol(s)
    char s[12];

This library routine converts a character array with format "yymmddhhmmss" into a long integer representing the number of seconds since midnight (00:00:00) Jan 1, 1973. It accepts dates up to 991231235959 (end of 1999); -1 is returned on error.

3.8 Destroy (Service Call)

    int destroy(ulink)

Link number "ulink" is removed from the caller's link table. Returned values: 0 is returned on success. -1 means that the link number is out of range; -2 means that it has an invalid destination.

3.9 Die (Service Call)

    die()

This call terminates the calling process. All links held by the calling process are destroyed.

3.10 Fork (Library Routine)

    int fork(fname, arg, mode)
    char *fname;

The Resource Manager starts a new process running the program found in the file named "fname", which must be in executable load format. The function named "main" is called with the integer argument "arg". "Mode" is a combination (logical "or") of the following flags, defined in "user.h":

    one of:      FOREGROUND, BACKGROUND, or DETACHED
    and one of:  SHARE, REUSE, or VIRGIN

If FOREGROUND is specified, then the new process can be killed by entering a control-C on the console. FOREGROUND is mainly used by the Command Interpreter. If BACKGROUND is specified, then a "process identifier" is returned that may be used to subsequently "killoff" the child. DETACHED (i.e., neither FOREGROUND nor BACKGROUND) is the default.
If SHARE is specified, then the Resource Manager will be willing to start this new process in the same code space as another process executing the same file, if that process was also spawned in SHARE mode. If REUSE is specified, the code space of an earlier process can be reused. VIRGIN means that a new copy must be loaded, and is the default. If the call succeeds, a link of type REQUEST and TELLDEST is given to the Resource Manager; the child may obtain this link by invoking "parline". The caller may receive messages from the child over this link, which has code 0 and channel CHAN14. A returned value of -1 indicates an error. Success is indicated by a return value of 0, except in the case of BACKGROUND mode, when the return value is a "process identifier".

3.11 Fsline (Library Routine)

    int fsline();

This routine returns the number of a REQUEST link to be used for communication with the File System Process. An error gives a returned value of -1.

3.12 Handler (Service Call)

    handler(vector,func,chan)
    (*func)();

The address of a device vector in low core is specified by "vector". The interrupt vector is initialized so that when an interrupt occurs, the specified routine "func" is called at interrupt level. If the interrupt-level routine performs an "awaken" call, a message will arrive on channel "chan" with urcode 0 and urnote INTERRUPT (see "receive"). Returned values: Success returns a value of 0. -1 means that there have been too many "handler" calls on that machine. -2 means that the channel is invalid. -3 means that the vector address is unreasonable. -4 means that the vector is already in use.

3.13 Inline (Library Routine)

    int inline();

This routine returns the number of a REQUEST link to be used for subsequent terminal input. The Teletype Driver only allows one input link to be open at any time. An error returns a value of -1.
3.14 Kill (Service Call)

    kill(lifeline)

The process indicated by "lifeline" (the return value of a successful "startup" call) is terminated. The lifeline is not destroyed. Returned values: Success returns a value of 0. -1 indicates that the link is invalid or not a "lifeline". Only the Resource Manager and Teletype Driver should use this call.

3.15 Killoff (Library Routine)

    int killoff(procid)

This routine asks the Resource Manager to kill a process that the calling process previously created as a BACKGROUND process with a "fork" request. The value returned from that "fork" is "procid". The effect on the dead process is as if it had called "die". 0 is returned for success, -1 for failure.

3.16 Link (Service Call)

    int link(code, chan, restr)

A new link is created. The calling process becomes the new link's owner (forever) and holder (usually not for very long). The caller specifies an integer, "code", which is later useful to the caller to associate incoming messages with that link. The caller also specifies "chan" as one of sixteen possibilities, CHAN1, ..., CHAN16, which are integers containing exactly one non-zero bit. Channels are used to receive messages selectively. CHAN16 should be avoided, for reasons explained in "call". CHAN15 should also be avoided, since the kernel uses it for remote loading. The returned value is the link number that the calling process should use to refer to the link. The argument "restr" is the sum of various restriction bits that tell what kind of link it is. The possibilities are:

    GIVEALL  DUPALL  TELLGIVE  TELLDUP  TELLDEST  REQUEST  REPLY

GIVEALL means that any holder may give the link to someone else. DUPALL means that any holder may duplicate it (i.e., give it to someone with "dup" = DUP; see "send"). TELLGIVE, TELLDUP, and/or TELLDEST cause the owner to be notified whenever a holder gives away, duplicates, and/or destroys the link, respectively (see "receive").
A process may duplicate, give away, or destroy a newly created link without restriction and without generating notifications; restrictions and notifications apply only to links received in messages. A link must be of type either REQUEST or REPLY. A REPLY link cannot be duplicated and disappears after one use; a REQUEST link can be used repeatedly unless it is destroyed by its holder. An enclosed link must always be of the opposite type from the link over which it is being sent. Returned values: The normal return value is a non-negative link number. -1 means that the link was specified as either both or neither of REPLY and REQUEST; -2 means that the channel is invalid.

3.17 Load (Service Call)

    int load(prog,fd,plink,arg)
    char *prog;

This call loads a program. If "fd" is -1, the console operator is requested to load "prog" manually. If "fd" is a valid link number (it should be a link to an open file) and "prog" is -1, the file is loaded on the same machine. In either of these cases, the return value is an "image", to be used for subsequent "startup" or "remove" calls. If "fd" is a link and "prog" is a machine number, the file is loaded remotely on the corresponding machine and started. The arguments "plink" and "arg" have the same meaning as in the "startup" call. The "plink" is automatically given (not duplicated). The return value is a "lifeline", as for a "startup" call. Returned values: 0 is returned on success. -2 and -3 mean that the link "fd" was out of range or had an invalid destination, respectively. -5 means that there wasn't room for the new image. -6 means that there are too many images. -10 means that the caller had no room for the lifeline. -11 means that the "plink" was out of range or had an invalid destination. Only the Resource Manager should use this call.
3.18 **Ltodate (Library Routine)**

```c
ltodate(n, s)
long n;
char s[30];
```

This library routine converts a long integer, representing the number of seconds since Jan 1, 1973, into a readable character string telling the time, day of the week, and date. Dates later than 1999 are not converted correctly.

3.19 **Nice (Service Call)**

```c
nice();
```

This call allows the Roscoe scheduler to run any other runnable process. (Roscoe has a round-robin non-pre-emptive scheduling discipline; "nice" puts the currently running process at the bottom.) It is used to avoid busy waits.

3.20 **Open (Library Routine)**

```c
int open(fslink, fname, mode)
char *fname;
```

The file named "fname" is opened for reading if "mode" is 0, for writing if "mode" is 1, and for both if "mode" is 2. The argument "fslink" is the caller's link to the File System. The returned value is a link number, used for subsequent "read", "write", and "close" operations. This link may be given to other processes, but not duplicated. -1 is returned on error.

3.21 **Outline (Library Routine)**

```c
int outline();
```

This routine returns the number of a link to be used for subsequent terminal output. An error returns a value of -1.

3.22 **Parline (Library Routine)**

```c
parline();
```

This routine asks the Resource Manager for a link to the parent of the caller. It assumes that the parent gave the Resource Manager a REQUEST link when it spawned the child. An error returns a value of -1. This call is typically used by a program being run by the Command Interpreter; the parent link (to the Command Interpreter) is used to get the command line arguments.

3.23 **Print (Library Routine)**

```c
int print(file, format, args...)
char *format;
```

This routine implements a simplified version of UNIX's "printf". The argument "file" is either a link to an open file or a terminal output link. The input is formatted and then "write" is called. The "format" is a character string to be written, except that two-byte sequences beginning with "%" are treated specially.
"%d", "%o", "%c", "%w", and "%s" stand for decimal, octal, character, long integer, and string format, respectively. As these codes are encountered in the format, successive "args" are written in the indicated manner. (Unlike "printf", there are no field widths.) A "%" followed by any character other than the above possibilities disappears, so "%%%" is written out as "%".

3.24 **Read (Library Routine)**

```c
int read(file, buf, size)
char *buf;
```

The argument "file" is either a link to an open file or a terminal input link. At most "size" bytes are read into the buffer "buf"; fewer are read if end-of-file occurs. For the terminal, control-D is interpreted as end-of-file. The returned value is the number of bytes actually read.

3.25 **Readline (Library Routine)**

```c
int readline(file, buf, size)
char *buf;
```

This routine is the same as "read", except that it also stops at the end of a line. For a file, a "newline" character is interpreted as end-of-line; however, "readline" is very inefficient for files. For the terminal, a "line-feed" or "carriage return" terminates a line; the last character placed in the buffer will be "newline" (octal 12). Control-D or control-W will also terminate a line, but they will not be included in the bytes read. The returned value is the number of bytes read.

3.26 **Recall (Library Routine)**

```c
int recall(inmess)
char *inmess;
```

If a previous "call" (or "recall") returned a value of -3, meaning that the message did not arrive in 5 seconds, a process can invoke the library routine "recall" to continue waiting. Only the return message buffer is specified (cf. "call").

Returned values: These are the same as for "call", except that -2 and -6 don't apply.
3.27 **Receive (Service Call)**

```c
int receive(chans, urmess, delay)
struct urmesg {          /* for receiving messages */
    int  urcode;         /* chosen by user, see "link" */
    int  urnote;         /* filled in by Roscoe, see "receive" */
    int  urchan;         /* chosen by user, see "link" */
    char *urbody;        /* body of incoming message */
    int  urlnenc;        /* index of enclosed link */
} *urmess;
```

The calling process waits until a message arrives on one of several channels, the sum of which is specified by "chans". All other messages remain queued for later receipt. The code and channel of the link for the incoming message are returned in "urmess->urcode" and "urmess->urchan", respectively.

The value of "urmess->urnote" is one of five possibilities: DUPPED, DESTROYED, GIVEN, INTERRUPT, or DATA. The first three of these mean that the link's holder has either duplicated, destroyed, or given away the link (see "send" and "link"). INTERRUPT is discussed under "handler". DATA means that the message was sent by "send".

The newly assigned link number for the link enclosed with the message is reported in "urmess->urlnenc"; the calling process now holds this link. If no link was enclosed, "urmess->urlnenc" is -1.

Before calling "receive", the user sets "urmess->urbody" to point to a buffer of size MSLEN into which the incoming message, if any, will be put. The caller may discard the message by setting "urmess->urbody" to zero.

The argument "delay" gives the time in seconds that the calling process is willing to wait for a message on the given channels; a "delay" of 0 means that the call will return immediately if no message is already there, and a "delay" of -1 means that there is no limit on how long the calling process will wait. A process can sleep for a certain amount of time by waiting for a message that it knows won't come (e.g., on an unused channel).

Returned values: 0 is returned on success.
-1 means the calling process has no room for the enclosed link (the message can be successfully received later), -2 means that the argument "urmess" was bad, and -3 means that the waiting time expired.

3.28 **Remove (Service Call)**

```c
remove(image);
```

The code segment indicated by "image", the return value of a successful "load" call, is removed. Only the process that performed a "load" is allowed to subsequently "remove" that image.

Returned values: Success returns a value of 0. -1 means that the image either doesn't exist or is in use, or that the caller didn't originally load the image.

The Resource Manager uses this call to create space for new images; no other program should use this call.

3.29 **Seek (Library Routine)**

```c
int seek(file, offset, mode)
```

The argument "file" is a link to an open file. The current position in the file is changed as specified by the "offset" and "mode". A value for "mode" of 0, 1, or 2 refers to the beginning, the current position, or the end of the file, respectively. The "offset" is measured from the position indicated by "mode"; it is unsigned if "mode" = 0, otherwise signed. A returned value of 0 indicates success; -1 indicates failure.

3.30 **Send (Service Call)**

```c
int send(ulink, elink, usmess, dup)
struct usmesg {          /* for sending messages */
    char *usbody;        /* body of message to be sent */
} *usmess;
```

This call sends a message along link number "ulink". The address of the message body, a string of MSLEN bytes, lies in "usmess->usbody". If no message is to be sent, "usmess->usbody" is zero. If the caller wishes to pass another link that it holds with the message, it specifies that link's number in "elink" (the "enclosed link"). If there is no enclosure, "elink" should be -1. The use of elinks is restricted in various ways; see "link".
The argument "dup" is either "DUP" or "NODUP"; in the first case, the enclosed link is duplicated so that both the sender and receiver will hold links to the same owner; in the second case, the enclosed link is given away so that only the receiver of the message will hold it.

Returned values: 0 is returned on success. -1 means that the ulink number is bad, and -2 means that the ulink's destination is not valid (the number is in the right range, but does not correspond to any active link). -3 and -4 have corresponding meanings for the elink. -5 means that the message was bad, -6 means that the elink can't be duplicated, and -7 means that the elink can't be given away. No error is reported if the destination process has terminated; in this case, the message is discarded.

3.31 **Setdate (Service Call)**

```c
setdate(n)
long n;
```

This service call sets the wall clock to "n", a long integer representing the number of seconds since midnight, Jan 1, 1973. Only the Command Interpreter and Resource Manager use this call.

3.32 **Startup (Service Call)**

```c
int startup(image, arg, plink, dup)
```

This call starts a process whose code segment is indicated by "image", the return value of a successful "load" call. The child is given "arg" as its argument to "main". The child's link number 0 is "plink", a link owned by the caller; this link is either given to the child or duplicated depending on whether "dup" is NODUP or DUP, respectively. The child cannot destroy link 0.

Returned values: Success returns a non-negative lifeline number, which can be used for a subsequent "kill". -1 means that the caller had no room for the lifeline. -2 or -3 means that the "plink" was out of range or had an invalid destination, respectively. -4 means that there was no room for the new process's stack. -5 means that the "image" was invalid.

Only the Resource Manager should use this call.
3.33 **Stat (Library Routine)**

```c
int stat(fslink, fname, statbuf)
char statbuf[36];
```

This library routine gives information about the file named "fname". The argument "fslink" is the process's link to the File System. An error returns a value of -1. After a successful call, the contents of the 36-byte buffer "statbuf" have the following meaning:

```c
struct {
    char minor;      /* minor device of i-node */
    char major;      /* major device */
    int  inumber;
    int  flags;
    char nlinks;     /* number of links to file */
    char uid;        /* user ID of owner */
    char gid;        /* group ID of owner */
    char size0;      /* high byte of 24-bit size */
    int  size1;      /* low word of 24-bit size */
    int  addr[8];    /* block numbers or device number */
    long actime;     /* time of last access */
    long modtime;    /* time of last modification */
} *buf;
```

NOTE: Some of these fields are irrelevant, since all Roscoe files have the same owner.

3.34 **Time (Service Call)**

```c
long time();
```

This service call returns a long integer that may be used for timing studies. The integer is a measure of time in intervals of ten-thousandths of seconds. NOTE: The time wraps around after a full double word (32 bits), i.e., after roughly five days at this resolution.

3.35 **Unlink (Library Routine)**

```c
int unlink(fslink, fname)
char *fname;
```

This library routine removes the file named "fname"; it cleans up after "create" and "alias". The argument "fslink" is the process's link to the File System. Errors return a value of -1.

3.36 **Write (Library Routine)**

```c
write(file, buf, size)
char *buf;
```

The argument "file" is either a link to an open file or a terminal output link. Using this link, "size" bytes are written from the buffer "buf". There are no return values.

4. CONSOLE COMMANDS

The Command Interpreter is a utility process that reads the teletype. When the Command Interpreter is awaiting a command, it types the prompt ".". A command consists of a sequence of "arguments" separated by spaces. Otherwise, spaces and tabs are ignored except when included in quotation marks ("). Within quotes, two consecutive quotes denote one quote; otherwise, quotation marks are deleted.
The first "argument" is interpreted as a "command" (see below). Command names may be truncated, provided the result is unambiguous. It is intended that all commands will differ in their first three characters. The following is an alphabetized list of console commands.

4.1 `alias <filename1> <filename2>`

The second indicated file becomes another name for the first indicated file. If either of these is "deleted", the other (logical) copy still exists; however, changes to either affect both.

4.2 `background <filename> <arg>`

The indicated file must be executable. It is started as a BACKGROUND process, with the integer argument "arg". The Command Interpreter prints out the new process's process identifier, which may be used for subsequent "killing", and then gives the next prompt.

4.3 `copy <filename1> <filename2>`

The second indicated file is created with a copy of the contents of the first indicated file.

4.4 `delete <filename>`

The indicated file is deleted.

4.5 `directory <filename>`

Status information for the indicated file is typed.

4.6 `help`

A list of available commands is displayed.

4.7 `kill <arg>`

The indicated argument should be the process identifier returned from a previous "background" command. The process referred to by the process identifier is killed.

4.8 `make <filename>`

The named file is created. Subsequent input is inserted into the file; the input is terminated by a control-D.

4.9 `run <filename> <arg1> <arg2> ...`

The indicated file should be an executable file. It is run as a FOREGROUND process. The Resource Manager is given a REQUEST link, which the new process may use to ask for the command line arguments. When the loaded program starts up, the argument to "main" tells the number of command line arguments. To get the individual arguments, the loaded program sends a message to the Command Interpreter (its parent). The first word of the message is ARGREQ, and the second is an integer specifying which argument is desired.
The name of the program is argument number 0. The returned message body is the argument, which is a null-terminated string of length at most MSLEN.

4.10 `set <modelist>` or `SET <modelist>`

This command changes the console input modes. The mode list is a sequence of keywords "x" or "-x", where "x" can be any of the following:

- `upper` (the terminal is upper case)
- `echo` (the terminal echoes input)
- `hard` (the terminal is hard-copy)
- `tabs` (the terminal has hardware tabs)

Keywords may be abbreviated according to the same rules as commands. The form "x" turns on the corresponding mode; "-x" turns it off. ("UPPER" is recognized for upper; "lower" means "-upper".) For more information, see the section "CONSOLE INPUT PROTOCOLS".

4.11 `time <format>`

If a format is given (as "yymmddhhmm"), the wall clock is set to that time, and printed. With no argument, "time" prints the wall clock time.

4.12 `type <filename>`

The indicated file is typed.

5. CONSOLE INPUT PROTOCOLS

The Teletype Driver performs interrupt-driven I/O, which allows for typing ahead. Also, the following characters have special meanings:

- Control-C: kill the running program (but don't kill the Command Interpreter itself)
- Control-D: end of file (terminates a "read" or "readline")
- Control-W: end of line (but no character sent)
- line-feed: end of line
- carriage return: end of line
- rubout: erase last character (unless the line is empty)
- Control-X: erase current line
- escape: next character should be sent as is

In "echo" mode, input is echoed; otherwise it is not. In "hard" mode, output is designed to be legible on hard-copy devices; otherwise the Teletype Driver assumes that the cursor can move backward, as on a CRT. In "tabs" mode, advantage is taken of hardware tabs on the terminal. In "upper" mode, the terminal is assumed to have only upper case. Input is converted to lower case, unless escaped. Upper case characters are printed and echoed with a preceding "!".
Escaped "[", "]", "@", "^", and "\" are converted to "{", "}", "`", "~", and "|", respectively, and the latter are similarly indicated by preceding "!"s.

6. UTILITY PROCESS PROTOCOLS

This section describes the protocols that user programs must follow to communicate with the utility processes when the library routines described earlier are inadequate. The four utility processes are the Resource Manager, the File System Process, the Teletype Driver, and the Command Interpreter.

The Resource Manager keeps track of which programs are loaded and/or running on the local machine. The kernel and the Resource Manager reside on each machine. The Teletype Driver governs I/O on the console; the Command Interpreter interprets console input. The File System Process implements a file system by communicating with the PDP-11/40. It need not exist on every machine.

During Roscoe initialization, one Resource Manager is started. It loads a full complement of utility processes (the Teletype Driver, Command Interpreter, and File System Process) on its machine and various utility processes on the other machines. When a particular Resource Manager is not given a local Teletype Driver or File System Process, it shares the one on the initial machine.

6.1 Input/Output Protocols

This section describes the message formats used for communicating with the File System and Teletype Driver Processes. A program that explicitly communicates with the File System Process or Teletype Driver must include the header files "filesys.h" and "ttdriver.h", which define the necessary structures.

To open an input or output line to the terminal, to change the modes on the terminal, or to inform the Teletype Driver of whom it should kill when encountering a control-C, a message is sent over the terminal link of the following form:

```c
struct ttinline {
    char tticom;
    char ttisubcom;
    char ttimodes;
}
```

"tticom" is either OPEN, STTY, MODES, or TOKILL.
In the case of OPEN, "ttisubcom" is either READ or WRITE, and the return message has the new link enclosed. In the case of STTY, "ttimodes" tells what the new modes should be (a bit-wise sum of ECHO, TABS, HARD, and UPPER). In the case of MODES (to find out the current modes), the return message has the modes in "ttimodes". In the case of TOKILL (to inform the Teletype Driver which process to kill on receipt of control-C), the message encloses a lifeline.

To open, create, unlink, alias, or get status information on a file, a message is sent over the file system link in the following form:

```c
struct ocmsg {
    int ocaction;
    int oclength;
    int ocmode;
}
```

"ocaction" is either OPEN, CREATE, UNLINK, ALIAS, or STAT. "oclength" tells the length of the file name; in the case of ALIAS, the file name sent is the concatenation of the two file names. "ocmode" is the mode for OPEN or CREATE; in the case of ALIAS, it holds the length of the first file name.

The file system sends back a message with an enclosed link, over which the file name is sent. This message again has an enclosed link for the File System Process's next response. In the cases of OPEN or CREATE, a successful return contains a valid enclosed link; for UNLINK, STAT, or ALIAS, there is no enclosed link. In the case of STAT, the return message has the structure of a "rdmesg" as in the case of READ below; the first word is 36 for success, -1 for failure, and the next 36 bytes of the message are the result of the stat. In all other cases, the first word is 0 on success, -1 on failure.

For either the terminal or the file system, reading or writing is done by sending a message of the following form:

```c
struct fsmesg {
    int  fsaction;
    int  fslength;
    char fstext[MSLEN-4];
}
```

"fsaction" should be either READ, READLINE, or WRITE. "fslength" tells how many bytes are intended to be read, or are being sent to be written. In the case of WRITE, the text is sent in subsequent messages, and nothing is returned.
In the cases of READ or READLINE, the response is of the following form:

```c
struct rdmesg {
    int  rdlength;          /* amount actually read */
    char rdtext[MSLEN-2];
}
```

The maximum allowable read is of size MSLEN-2.

To perform a seek on an open file, send a message to the file system of the following form:

```c
struct skmesg {
    int skaction;           /* should be SEEK */
    int skoffset;
    int skmode;
}
```

Any enclosed link in the return message indicates success, and should be immediately destroyed.

6.2 Resource Manager Protocols

Processes that communicate explicitly with the Resource Manager must include the header file "resource.h". The following structure is declared there:

```c
struct rmmesg {             /* messages to Resource Managers */
    int rmreq;              /* type of request */
    int rmarg;              /* various miscellaneous arguments */
    int rmmode;             /* the mode for STARTs or KILLs */
}
```

The Resource Manager keeps track of which images (code segments) and processes exist. A separate Resource Manager runs on each machine in the network; these programs communicate with each other, but are relatively independent. Each Resource Manager holds a terminal link and file system link, which are either for local utility processes or else links received from the first Resource Manager initialized. Whenever a Resource Manager has a local terminal, it also has a local Command Interpreter.

There are three kinds of processes: FOREGROUND, BACKGROUND, and DETACHED. When a process is started, its link 0 is owned by the local Resource Manager, to whom all of this process's requests are directed.

The first FOREGROUND process for any terminal is always the Command Interpreter, which initially "has the ball". Each terminal always has one FOREGROUND process that "has the ball". The process "with the ball" may create another FOREGROUND process, which means that the child now "has the ball". The meaning of "having the ball" is that a control-C entered on the corresponding terminal will terminate the process.
When the process "with the ball" terminates, its parent then "recovers the ball" and will be terminated by the next control-C. If one of the processes in this FOREGROUND chain terminates, the chain is re-linked appropriately. The Command Interpreter is an exception in that control-C's have no effect on it.

A process may also create another process as a BACKGROUND process. In this case, the child's process identifier is returned to the parent, and later the parent can use this identifier to terminate the child. These identifiers are assigned by the Resource Manager and are distinct from the process identifiers used in the kernel. A DETACHED process cannot be terminated by either method.

A user may make five kinds of requests on its Resource Manager:

1. RMTTREQ Request

The Resource Manager is requested to give the requestor a link to the requestor's terminal. This link will be sent over the enclosed link in the request, which should therefore be a REPLY link.

2. RMFSREQ Request

The Resource Manager duplicates its file system link and sends it back over the enclosed link in the request, which should therefore be a REPLY link.

3. RMSTART Request

The Resource Manager will start a process, using the link enclosed with this request for two purposes: 1) to respond to the request (see conditions for response below), or 2) to save it and give it to the child if the child asks for it (see RMPLINK below). The caller must be careful, of course, not to give a REPLY link if both uses are intended. Also, the caller must make the enclosed link GIVEALL if the Resource Manager should try to load the process on another machine, rather than giving up if it doesn't fit on the local one.

The RMSTART request also specifies the file name and an integer argument to be given to the child when it starts. The caller also specifies a "mode" for starting the child, which is a combination of bits with various meanings.
The user should specify either BACKGROUND, FOREGROUND, or DETACHED (the default is DETACHED). FOREGROUND is only allowed if the requestor currently "has the ball" for its terminal.

The user should specify either SHARE, REUSE, or VIRGIN (the default is VIRGIN). These alternatives are described above (see "fork").

The user should also specify either GENTLY or ROUGHLY (the default is GENTLY). If GENTLY, the Resource Manager will first try to load the process locally without throwing out any other unused images, and then will try to do the same on other machines. When this fails, or if ROUGHLY was specified, it tries to make room locally for the new process, and then tries to do so on other machines.

The user should also specify either ANSWER or NOANSWER (the default is NOANSWER). If ANSWER is specified, or if BACKGROUND was specified, then the Resource Manager sends a reply over the enclosed link. The first word of the reply is the return code; -1 always means failure; 0 means success except in the case of BACKGROUND, when the value returned is the process identifier of the child.

An existing code segment is reusable if the filename still refers to an existing publicly executable load-format file that has not been modified since the copy in question was loaded. Any number of processes may share a code segment. The terminal associated with a child process is always the same as the one associated with its parent; the Command Interpreter is loaded with a terminal during initialization.

4. RMKILL Request

The Resource Manager kills the process whose process identifier is given as part of the request. The request may enclose a link that is used to give a one-word acknowledgement of success or failure if the request specifies ANSWER (as in RMSTART, described above). The process being killed must of course be BACKGROUND, and only the process that started it is allowed to kill it.

5. RMPLINK Request

The Resource Manager returns the link that was originally enclosed with the request that started this process. It is returned over the link enclosed with the RMPLINK request, which must therefore be of the proper type, whichever that may be.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the assistance of the following graduate students who have been involved in the Roscoe project: Jonathan Dreyer, Jack Fishburn, Michael Horowitz, Will Leland, Paul Pierce, and Milo Velimirovic. Their hard work has helped Roscoe to reach its current level of development and will be essential in completing its design and implementation.

REFERENCES

Finkel, R. A., Solomon, M. H., The Roscoe Kernel, University of Wisconsin -- Madison Computer Sciences Technical Report #337, September 1978.

Ritchie, D. M., C Reference Manual, unpublished memorandum, Bell Telephone Laboratories.

Ritchie, D. M., Thompson, K., "The UNIX Time-Sharing System", Communications of the ACM, Vol. 17, No. 7, pp. 365-375, July 1974.

Solomon, M. H., Finkel, R. A., Roscoe -- A Multiminicomputer Operating System, University of Wisconsin -- Madison Computer Sciences Technical Report #321, September 1978.

Tischler, R. L., Finkel, R. A., Solomon, M. H., Roscoe Utility Processes, University of Wisconsin -- Madison Computer Sciences Technical Report #338, September 1978.
Defining Multi-Tenancy: A Systematic Mapping Study on the Academic and the Industrial Perspective

Jaap Kabbedijk, Cor-Paul Bezemer, Slinger Jansen, Andy Zaidman

DOI: 10.1016/j.jss.2014.10.034. Published in the Journal of Systems and Software, 2015 (submitted manuscript).

Abstract

Software as a service is frequently offered in a multi-tenant style, where customers of the application and their end-users share resources such as software and hardware among all users, without necessarily sharing data. It is surprising that, with such a popular paradigm, little agreement exists with regard to the definition, domain, and challenges of multi-tenancy. This absence is detrimental to the research community and the industry, as it hampers progress in the domain of multi-tenancy and enables organizations and academics to wield their own definitions to further their commercial or research agendas. In this article, a systematic mapping study on multi-tenancy is described in which 761 academic papers and 371 industrial blogs are analysed. Both the industrial and the academic perspective are assessed, in order to get a complete overview. The definition and topic maps provide a comprehensive overview of the domain, while the research agenda, listing four important research topics, provides a roadmap for future research efforts.
Keywords: Multi-tenancy, Systematic Mapping Study, Definition, Academic Perspective, Industrial Perspective

1. Introduction

A growing influence of cloud computing and Software-as-a-Service (SaaS) can be observed in the enterprise software domain (Forbes). One of the key features of SaaS is the ability to share computing resources in offering a software product to different customers. To benefit from this ability, the architecture of SaaS products should cater for the sharing of software instances and databases. A popular architectural style for achieving this is known as multi-tenancy. The concept of multi-tenancy, within the software architecture community, usually refers to the ability to serve multiple client organizations through one instance of a software product. It can be seen as a high-level architectural pattern in which a single instance of a software product is hosted on the software vendor’s infrastructure, and multiple customers access the same instance (Bezemer et al., 2010). The specific method for sharing instances (e.g. reentrancy or queueing) is generally not specified within the multi-tenancy pattern. Multi-tenancy allows for the customization of the single software instance according to the varying requirements of many customers (Kwok et al., 2008), contrasting with the multi-user model, in which there is no substantial variability (Bezemer and Zaidman, 2010). Also, multi-tenancy is one of the key factors for achieving higher profit margins by leveraging economies of scale (Guo et al., 2007). Multi-tenancy has evolved from a number of previous paradigms in information technology. More concretely, starting in the 1960s, companies performed time-sharing: they rented space and processing power on mainframe computers to reduce computing expenses, and often they also reused existing applications (Wilkes, 1975).
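The single-instance, multiple-customer pattern described above can be illustrated with a minimal sketch (all names and the configuration scheme are hypothetical, chosen only for illustration): one shared application object serves requests for several tenants, while each tenant's configuration is kept separate.

```python
# Minimal illustration of the multi-tenancy pattern: a single application
# instance serves multiple tenants, each with its own configuration.
# Class, method, and tenant names are hypothetical.

class MultiTenantApp:
    def __init__(self):
        self.tenant_config = {}  # per-tenant settings, kept separate

    def register_tenant(self, tenant_id, config):
        self.tenant_config[tenant_id] = config

    def handle_request(self, tenant_id, page):
        # One shared code path; behaviour varies only via tenant config.
        theme = self.tenant_config[tenant_id].get("theme", "default")
        return f"[{theme}] rendering {page} for {tenant_id}"

app = MultiTenantApp()                  # one shared instance
app.register_tenant("acme", {"theme": "dark"})
app.register_tenant("globex", {})       # falls back to defaults
print(app.handle_request("acme", "dashboard"))
print(app.handle_request("globex", "dashboard"))
```

Note how customization happens purely through per-tenant configuration data, matching the "configuring application metadata" view of customization discussed later in this paper.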
Around 1990 the application service provider (ASP) model was introduced, where ASPs hosted applications on behalf of their customers. ASPs were typically forced to host applications on separate machines or as separate processes (Smith and Kumar, 2004). Finally, the multi-user model is best known from popular consumer-oriented web applications (e.g. Facebook) that are functionally designed as a single application instance that serves all customers (Bezemer and Zaidman, 2010). Multi-tenant applications represent a natural evolution from these previous paradigms. Similarly, around the year 2000, Bennett et al. (2000) set out a vision for service-based software applications, in which they note a number of essential ingredients for what we now call multi-tenancy, namely: demand-led provisioning of software services and a high degree of personalization of software. In the domain of software (and hardware) systems, the topic of multi-tenancy appeared in scientific literature relatively recently, with the first explicit mention of the term in a paper by Chong and Carraro (2006). Within multi-tenancy, the hardware and software infrastructure is shared and a hosted application can serve user requests from multiple companies concurrently (Guo et al., 2007). Multi-tenancy is regarded as a key attribute of well-designed SaaS applications by Chong and Carraro, who developed a commonly used maturity model of SaaS that distinguishes four maturity levels. The last two maturity levels in this model describe multi-tenancy, rendering it a requirement for a mature SaaS application. Multi-tenancy is not confined to specific resources, but is applicable at different levels in a system’s architecture, for example at the database or instance level. As a result, various approaches to a multi-tenant architecture are possible (Osipov et al., 2009; Natis, 2008).
Most academics and practitioners agree that multi-tenancy enables software vendors to serve multiple customers from a single online product, but specific implementations differ significantly, leading to an indistinct understanding of the different levels to which multi-tenancy can be applied. This varying definition of multi-tenancy is not only confusing within academia and industry; it also complicates the communication between them. Oracle, for example, looks at multi-tenancy primarily from a database perspective (Oracle, 2009), while Microsoft looks at multi-tenancy more from a functional perspective (Microsoft, 2012). The goal of this paper is to chart and bridge these varying definitions and the views from both industry and academia on multi-tenancy. First, there is a need for an overview of the different definitions of multi-tenancy, followed by a clear analysis of what is shared among them. Having such an overview will improve the understandability of multi-tenancy and allows parties to be more aware of the varying nature of the definitions of multi-tenancy at this moment. Establishing common ground also allows us to define research challenges to guide future research in the domain of multi-tenancy. This paper aims at satisfying these needs by performing a structured search in academic literature and blog posts, as described in Section 2. All search data is analysed (Section 3) and an overview of the results can be found in Section 4. The different perspectives on multi-tenancy emerging from the results are synthesized into one overarching definition (Section 5). To structure future research, a research agenda containing seven areas of interest is proposed (Section 6), followed by a conclusion and discussion in Section 8.

2.
Research Method

In order to get an overview of the current state of multi-tenancy literature and to gain insight into the interpretation of multi-tenancy from different perspectives, a set of research questions has been constructed. The main research question (RQ) is as follows:

RQ: How to characterize multi-tenancy?

The main research question is addressed by answering the sub research questions (SubRQs) listed below. Each question focusses on a different perspective on the characterization of multi-tenancy.

SubRQ1: What comprehensive definition of multi-tenancy can be constructed based on current literature?
Rationale: Multi-tenancy is not a new concept, and many different definitions already exist. Since these definitions may reflect different perspectives on a software product and focus on different elements, an overall definition should be developed.

SubRQ2: How is multi-tenancy interpreted in academia and industry?
Rationale: The use or understanding of the concept of multi-tenancy in industry could differ from the common use in academia. This possible chasm between academia and industry inhibits cooperation and communication between both domains. To examine this, not only academic papers are analyzed, but also 300 internet blog results are used, to be able to compare uses in both domains.

SubRQ3: What future research topics can be defined based on current literature?
Rationale: Since the domain of multi-tenancy research is rather young and scattered, there is a need for guidance on future research. Several research topics are distilled from the academic literature.

The questions are answered based on the academic papers and public blogs aggregated by the systematic search and selection process that is followed in this research. Two different datasets are gathered and analyzed using a Systematic Mapping Study (SMS) approach. The first dataset is gathered from within the academic domain, while the second dataset is composed of blogs from the industry domain.
An SMS is the appropriate method when trying to answer a general research question on a certain topic (Kitchenham et al., 2010) and provides a detailed overview of that topic. A previous paper by Anjum and Budgen (2012) was used as a guideline for reporting the mapping study.

2.1. Academic Literature Collection

In order to identify, evaluate and interpret the available literature relevant to a particular topic in an unbiased, objective and systematic way, common practice is to perform a Systematic Literature Review (SLR) (Budgen et al., 2008). Properly executed SLRs are still infrequent in the field of Software Engineering (SE) (Kitchenham et al., 2009). This is probably caused by the fact that an SLR is time-consuming and should be performed rigorously within a mature research domain. However, if little evidence exists or the topic is too broad or scattered, then a Systematic Mapping Study (SMS) is the appropriate method (Kitchenham, 2004). An SMS is used to map the field of a certain topic, instead of answering a specific research question (Petticrew and Roberts, 2009). Since the research domain of multi-tenancy is not mature yet and an initial search shows that definitions differ significantly, this study uses an SMS to get an overview of the concept of multi-tenancy. This paper presents an SMS in which the different perspectives on multi-tenancy are examined. The systematic mapping study was performed according to the phases described by Petersen et al. (2008). First, a search for relevant publications was performed; second, a classification scheme was constructed; and third, the publications were mapped. The details of the different steps are described below. The first phase consisted of literature retrieval. The steps and the resulting dataset size are as follows:

1.
**Search Execution** — Dataset retrieval using the search query on the following databases: ACM, CiteSeerX, IEEE, ISI, Science Direct, Scopus, SpringerLink, and Wiley. Since Google Scholar aggregates from all the databases listed, it was excluded from the search to minimize the number of duplicates. The search has been performed using the following keyword query:

“multi-tenancy” OR “multi-tenant” OR multitenancy OR multitenant OR “multi tenancy” OR “multi tenant”

2. **Paper Screening** — Consists of a check for completeness, relevance, and compliance with the inclusion and exclusion criteria. Included papers are peer-reviewed academic papers. Excluded are non-English papers and duplicates not identified in the previous step.

3. **Filtering on Title and Year** — Deletion of papers written before 2000, because the term multi-tenancy in this field was non-existent before that year. Papers describing multi-tenancy unrelated to IT (e.g. related to housing) are excluded.

4. **Filtering on Abstracts** — Papers that merely use the term but do not actively discuss multi-tenancy are removed as well.

5. **Filtering on Full Text** — The final selection was based on the criterion that the paper must either explicitly state a multi-tenancy definition or refer to one.

The results of conducting all five steps were systematically logged in a central database accessible by all authors. After each step, 10% of all papers were selected by querying every 10th entry in the database, and checked for inter-rater agreement by all authors. If a paper was rated differently by another author, the discrepancy was discussed and corrected. When more than one discrepancy was identified, the step was redone. This inter-rater agreement check was done in order to ensure construct validity of the data gathering (Eisenhardt, 1989).

2.2. **Industrial Literature Collection**

The gathering of industrial literature (i.e.
blogs) was performed in order to provide a sanity check for the academic literature. The results were not used explicitly for the construction of the multi-tenancy definition or research agenda, but serve to examine potential differences in the interpretation of multi-tenancy between industry and academia. For the industrial perspective of this survey, we have mirrored the process of the Systematic Mapping Study for scientific literature. We use the same phases that Petersen et al. (2008) describe for the traditional SMS:

1. **Search Execution** — Dataset retrieval using the search query. We use the same search query as for the scientific literature, but this time apply it to the traditional Google search and the Google Blog search (www.google.com/blogsearch). The search string used was:

“multi-tenancy” OR “multi-tenant” OR multitenancy OR multitenant OR “multi tenancy” OR “multi tenant”

The search results are limited to the first 300 results of the traditional Google search and to 100 of the Google Blog search. This cut-off was imposed to keep the results manageable, but we also found that around these thresholds the search results become increasingly irrelevant (e.g., the traditional Google search started returning results that were unrelated to multi-tenancy in the area of computer science).

2. **Website Categorization** — The first 100 entries of the traditional Google search are screened, and subsequently the second and fourth authors of the paper established an initial categorization of the websites that were encountered. The categorization is first performed by both authors independently, after which the initial sets are compared and discussed. Based on this discussion, the final set is constructed. Having a website categorization makes it easier to understand the importance of multi-tenancy in industry and how we could learn from these websites when considering how multi-tenancy is defined and used in industry.

3.
**Inter-rater agreement** — The categorization of the websites is done by the second and fourth author, each of whom categorizes half of the website entries. In order to achieve inter-rater agreement, 10 website entries from the second author and another 10 from the fourth author were exchanged and re-classified by the other.

4. **Investigation of Full Text** — Because a website typically does not have the same structure as a scientific paper, we screened the full text of each website in order to determine (1) whether the search result is within the scope of this study and (2) in which category the website should be placed. The scope was determined to be everything related to IT. Whenever differences existed in the classification done by the second and fourth author, agreement was reached through discussion. The classification result and similar classifications were adjusted according to the new joint interpretation.

3. Classification

3.1. Academic Literature Classification

In this section, the analysis of the academic literature is presented. An overview of the results per phase in the systematic mapping study is given below, followed by a top-down approach for the literature analysis.

1. Search Execution — The search resulted in 1371 papers. After duplicate removal based on title, a database of 761 papers was created.
2. Paper Screening — This phase resulted in 672 applicable papers.
3. Filtering on Title and Year — Resulted in 259 applicable papers.
4. Filtering on Abstracts — After filtering, 92 applicable papers were identified.
5. Filtering on Full Text — This resulted in 48 applicable papers.

After checking for inter-rater agreement in each step, small discrepancies between the raters were found. None of the steps, however, had a discrepancy larger than one paper, which meant none of the steps had to be redone.
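The inter-rater checks used throughout the study amount to a simple percent-agreement computation over the items rated by both raters; a hypothetical sketch with made-up category labels:

```python
# Percent agreement between two raters who each assign a set of
# categories to every item (here: websites or papers). A pair counts
# as agreement only when both sets are identical. Labels are hypothetical.

def percent_agreement(rater_a, rater_b):
    matches = sum(1 for a, b in zip(rater_a, rater_b) if set(a) == set(b))
    return matches / len(rater_a)

rater_a = [{"blog"}, {"howto"}, {"advert", "corporate"}, {"definition"}]
rater_b = [{"blog"}, {"howto"}, {"advert"}, {"definition"}]
print(percent_agreement(rater_a, rater_b))  # 3 of 4 identical -> 0.75
```

A stricter analysis could use a chance-corrected statistic such as Cohen's kappa, but the plain proportion shown here matches the kind of agreement figure reported in this study.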
The small level of discrepancy can be explained by the fact that both authors are knowledgeable in the area of multi-tenancy and already knew many of the papers published within this domain. Figure 1 shows an overview of the publication outlet types for papers on multi-tenancy. Conferences clearly play a dominant role in publishing papers on multi-tenancy (27 papers), followed by journals (18 papers). Only three papers were found in workshops.

Figure 1: Publication outlets for academic articles on multi-tenancy

To further investigate the state of the art in the scientific literature, an analysis of the research was performed as well as a classification by research type. This overview is useful for identifying gaps in current literature. To classify the type of research approach, six existing distinct research categories were used (Wieringa et al., 2009).

Table 1: Categorization of 48 papers, listing the number of occurrences (N) for each type of paper encountered. <table> <thead> <tr> <th>Category</th> <th>Description</th> <th>N</th> </tr> </thead> <tbody> <tr> <td>Solution Proposal</td> <td>Proposes a solution with arguments for its relevance without an evaluation in practice but a proof-of-concept is acceptable.</td> <td>26</td> </tr> <tr> <td>Validation Research</td> <td>Investigates an existing solution and validates it by using a sound scientific approach.</td> <td>10</td> </tr> <tr> <td>Evaluation Research</td> <td>Investigation of a problem or implementation of a technique in practice.</td> <td>6</td> </tr> <tr> <td>Philosophical Paper</td> <td>Introduces a new view on a subject, a new concept, conceptual framework.</td> <td>5</td> </tr> <tr> <td>Experience Paper</td> <td>Explains why or how something has been done in practice. For example lessons learned from projects.</td> <td>1</td> </tr> <tr> <td>Opinion Paper</td> <td>Contains an author’s opinion on a subject.</td> <td>0</td> </tr> </tbody> </table>
An overview of these types of research approaches is presented in Table 1. Papers were classified using an evolutionary approach, where subjects are selected based on title, abstract and keywords. Papers are categorized, and categories are evolved throughout the review using splitting and merging. The analysis of the results focuses on presenting the frequencies of publications for the different research categories. An overview of popular and less popular categories can be used to identify gaps and possibilities for future research. It also provides a picture of the nature of the scientific material and the maturity of the field. The results from this analysis are depicted in Table 2. Please note that the last research category (i.e. Opinion Paper) is not included in the table, since no papers were part of this category. The list of topics is based on the abstracts of the papers and the keywords listed. It is possible that one paper discusses multiple topics, in which case it is listed under all of these topics. A paper, however, is always part of only one research category.

3.2. Industrial Literature Classification

This section presents the results of the industrial literature gathering per phase, followed by a discussion of the analysis.

1. Search Execution — Among the results were a number of scientific papers, all of which were also part of our search for scientific literature.
Table 2: Multi-tenancy research topics per research category <table> <thead> <tr> <th></th> <th>Evaluation Research</th> <th>Solution Proposal</th> <th>Validation Research</th> <th>Philosophical Paper</th> <th>Experience Paper</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>SaaS</td> <td>4</td> <td>19</td> <td>6</td> <td>2</td> <td>1</td> <td>32</td> </tr> <tr> <td>Architecture</td> <td>4</td> <td>13</td> <td>7</td> <td>3</td> <td>1</td> <td>28</td> </tr> <tr> <td>Implementation</td> <td>2</td> <td>8</td> <td>2</td> <td>2</td> <td>1</td> <td>15</td> </tr> <tr> <td>Database</td> <td>-</td> <td>4</td> <td>6</td> <td>2</td> <td>1</td> <td>13</td> </tr> <tr> <td>Balancing &amp; Placement</td> <td>2</td> <td>6</td> <td>2</td> <td>3</td> <td>-</td> <td>13</td> </tr> <tr> <td>Variability</td> <td>1</td> <td>8</td> <td>1</td> <td>-</td> <td>1</td> <td>11</td> </tr> <tr> <td>Infrastructure</td> <td>1</td> <td>5</td> <td>3</td> <td>1</td> <td>-</td> <td>10</td> </tr> <tr> <td>Industry Evaluation</td> <td>1</td> <td>4</td> <td>1</td> <td>2</td> <td>1</td> <td>9</td> </tr> <tr> <td>Quality Assurance</td> <td>1</td> <td>6</td> <td>1</td> <td>-</td> <td>-</td> <td>8</td> </tr> <tr> <td>Platform Development</td> <td>-</td> <td>4</td> <td>2</td> <td>1</td> <td>-</td> <td>7</td> </tr> <tr> <td>Security</td> <td>-</td> <td>3</td> <td>1</td> <td>2</td> <td>-</td> <td>6</td> </tr> <tr> <td>Standards</td> <td>-</td> <td>3</td> <td>-</td> <td>2</td> <td>-</td> <td>5</td> </tr> <tr> <td><strong>Total</strong></td> <td><strong>16</strong></td> <td><strong>83</strong></td> <td><strong>32</strong></td> <td><strong>20</strong></td> <td><strong>6</strong></td> <td></td> </tr> </tbody> </table> After removing duplicates, this resulted in 371 entries. 2. **Website Categorization** — Eight categories were identified, as shown in Table 3. The first half of the websites was categorized by the second author, the second half was categorized by the fourth author. 3. 
**Inter-rater agreement** — To validate the choice of categories and evaluate the categorization process, a random sample (N=12) of websites was categorized by both the second and fourth author and compared afterwards. Small differences existed in the classification, mainly due to different interpretations of the categories. In 75% (9/12) of the cases, both authors completely agreed on the categorization (average of 2.33 categories per website). In the three other cases, they at least partly agreed on the categorization. Considering that a website can be assigned a subset of unknown size out of 8 different categories, we considered this to be a good level of inter-rater agreement.

4. **Investigation of Full Text** — All of the 371 entries appeared to be relevant to the concept of multi-tenancy in IT.

As mentioned in Section 2.2, we started out by analyzing the first 100 entries returned by Google to create an initial categorization of search results. Small changes to the categorization were made while analyzing all search entries. The final categories that we ended up with are listed in Table 3, which also describes the criteria that we used for the categorization process. Note that we tried to distinguish “corporate opinions” from “individual opinions” as much as possible, hence the many different categories. From the initial search results we removed duplicates and excluded 14 academic papers and dead website links. This resulted in 371 search entries being investigated, divided over the aforementioned categories; an overview can be seen in Table 3. It should be noted that some search results were categorized in multiple categories; for example, a corporate blog might also contain an explicit advertisement for the product being described.

4. Observations

This section presents a set of observations, based on the results of the academic and industrial result classification. All observations were discussed among all four authors and adapted if needed.
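The duplicate removal applied to both datasets (removal based on title for papers, and deduplication of search entries) can be sketched in a few lines; the records and helper names below are hypothetical:

```python
# Sketch of duplicate removal and year filtering as used in the
# mapping study's selection steps. Paper records are hypothetical.

def normalize(title):
    return " ".join(title.lower().split())

def deduplicate_by_title(papers):
    seen, unique = set(), []
    for paper in papers:
        key = normalize(paper["title"])
        if key not in seen:
            seen.add(key)
            unique.append(paper)
    return unique

def filter_by_year(papers, min_year=2000):
    return [p for p in papers if p["year"] >= min_year]

papers = [
    {"title": "Multi-Tenancy in SaaS", "year": 2010},
    {"title": "multi-tenancy  in saas", "year": 2010},  # duplicate title
    {"title": "Time-Sharing Systems", "year": 1975},    # pre-2000
]
remaining = filter_by_year(deduplicate_by_title(papers))
print(len(remaining))  # only the single 2010 paper survives both filters
```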
The observations do not aim to provide a complete list, but rather give a representative illustration of the multi-tenancy domain.

Table 3: Categorization of 371 Google search entries, listing the number of occurrences (N) <table> <thead> <tr> <th>Category</th> <th>Description</th> <th>N</th> </tr> </thead> <tbody> <tr> <td>Non-corporate blog</td> <td>A software engineer or technology expert writing about multi-tenancy. No (corporate) affiliation is mentioned or could be retrieved.</td> <td>117</td> </tr> <tr> <td>Corporate blogs</td> <td>White papers mentioning multi-tenancy. This category consists of web pages that are either hosted by a corporation or that explicitly state that the author or text was written from a specific company’s perspective. It does not directly advertise the services of the company with regard to multi-tenant technologies, but it describes the company’s vision on multi-tenancy.</td> <td>84</td> </tr> <tr> <td>Howto</td> <td>Web page describing how to implement multi-tenancy. No corporate affiliation or link to a specific product is mentioned.</td> <td>82</td> </tr> <tr> <td>Advertisement</td> <td>Web page advertising a product or service related to multi-tenancy.</td> <td>81</td> </tr> <tr> <td>Evangelism</td> <td>Web page containing a strong opinion either in favor or against multi-tenancy.</td> <td>79</td> </tr> <tr> <td>Definition</td> <td>Web page containing a definition (or a discussion on the definition) of multi-tenancy.</td> <td>38</td> </tr> <tr> <td>Support forum</td> <td>Forum discussing multi-tenancy. This forum can be product-specific or product-agnostic. Some support forums are hosted by corporations, others are hosted by StackOverflow, Google Groups, etc.</td> <td>36</td> </tr> <tr> <td>Product manual</td> <td>Web page describing how to use a multi-tenancy oriented product or service. This category of websites can be linked to a specific product or service.</td> <td>18</td> </tr> </tbody> </table>

4.1. Academic Paper Results

Based on the paper classification in Section 3.1, the following observations are made:

**Observation 1: Conference oriented** — As Figure 1 shows, around 56% of all research papers on multi-tenancy are published in conference proceedings, compared to 37.5% in journal publications and only around 6.5% in workshop proceedings. The emphasis on conference publications is not uncommon in the IT domain, but the lack of workshop publications is striking. Such a distribution could indicate a very mature research domain, but considering the novelty of multi-tenancy and the number of papers published, this is unlikely. A more plausible cause is that the domain of multi-tenancy research has no strong community yet and workshops still have to be formed, causing researchers to submit results to conferences and journals, which often have a broader scope.

**Observation 2: Many proposals, lack of experience** — Table 2 shows a strong emphasis on solution proposals and only one paper reporting on industrial experiences. This imbalance indicates that the research domain is still not mature, and that most of the solutions proposed have not yet been implemented or evaluated. The large difference can also signal a lack of cooperation between industry and academia.

**Observation 3: Architecture and SaaS play a big role** — Unsurprisingly, the topics of SaaS (32 papers) and architecture (28 papers) are addressed frequently in multi-tenancy research. Multi-tenancy is clearly positioned as an architectural tactic for online software. Since SaaS and architecture refer to the entire software stack, this observation also shows that research focusses on the complete software product instead of just one level (e.g. the database).

4.2. Blog Post Results

We did a full reading of four categories of web pages, being web pages or blog posts in the categories non-corporate blog, corporate blog, definition and evangelism.
This reading gave us an impression of some of the advantages, disadvantages and/or issues that practitioners see or have with multi-tenancy. We have translated this impression into the following observations:

**Observation 1: Different multi-tenancy levels** — Some practitioners make a distinction between multi-tenancy at the level of the *infrastructure* (multiple operating system instances on the same physical hardware), at the level of the *platform* (different applications and/or tenants on the same instance of the operating system) and at the *application* level (a single run-time stack is shared by multiple tenants). While not every blog post or website is perfectly clear on this, we observe that most websites on multi-tenancy are actually about the infrastructural or platform level application of multi-tenancy.

**Observation 2: Cloud-based nature** — For many practitioners multi-tenancy is *evident* in a cloud-based setting (IBM, 2011). This points to two distinct issues with how multi-tenancy is perceived by practitioners. First, a cloud environment is, by its very purpose, a shared platform environment, which in turn indicates that multi-tenancy is seen by many as another way of saying *Platform as a Service* or PaaS. Indeed, in a PaaS setting, tenants can rent a piece of a shared platform, which can consist of an operating system and standard server applications like a web server, a database, etc. Secondly, in some cases, practitioners were also considering multi-tenancy at the level of software in a cloud-based setting. In this context, practitioners were considering that Software as a Service offerings can be offered more efficiently if the underlying platform is elastic.

**Observation 3: Configurability of multi-tenant applications** — Configurability, or variability, of multi-tenant applications is seldom mentioned.
This raises two interesting points:

- As discussed in Observation 1, this may hint at a greater awareness of multi-tenancy at the infrastructural or platform level, where configurability might not be so much of an issue.
- There may be no apparent need for the configurability of multi-tenant software applications, which might indicate that most applications are actually *multi-user* applications, or applications that share resources but do not offer (advanced) forms of configurability.

When customization is discussed, it is clear that customization should lead to a tailored experience for each tenant and that customization should be done by configuring application metadata. As such, configurability requires no programming. Another important point mentioned is that customizations for one client should not affect other clients.

**Observation 4: Multi-tenant database** — A number of websites explicitly mention the database as being multi-tenant. In this situation, different applications share a single database. When a single multi-tenant application is using the database, some website authors express concern about data separation, i.e., making sure that tenants do not get access to another tenant’s data.

5. Definition

A total of 43 different definitions was extracted from the academic literature, with the aim of finding the best definition for use in the multi-tenancy domain: one that describes the relevant elements and applies at all levels at which multi-tenancy is possible.

Identification: The 43 definitions were identified by manually searching through papers for terms such as “we define multi-tenancy” or “multi-tenancy is defined as”. A common observation is that these definitions are typically poorly formulated and only applicable at one level of the software stack or infrastructure. An example: “A multi-tenant cloud system allows multiple users to share a common physical computing infrastructure in a cost-effective way” (Du et al., 2010).
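The manual identification of definition sentences via cue phrases, as described above, could in principle be assisted by a simple pattern search. A hypothetical sketch (the cue phrases match those mentioned in the text; the sample sentences are invented):

```python
import re

# Flag sentences that look like explicit multi-tenancy definitions,
# using the cue phrases mentioned in the text. Sentences are hypothetical.

DEFINITION_CUES = re.compile(
    r"we define multi-?tenancy|multi-?tenancy is defined as", re.IGNORECASE
)

def candidate_definitions(sentences):
    return [s for s in sentences if DEFINITION_CUES.search(s)]

sentences = [
    "We define multi-tenancy as the sharing of one instance among tenants.",
    "Multitenancy is defined as serving several customers from one system.",
    "This paper evaluates database performance.",
]
print(len(candidate_definitions(sentences)))  # first two sentences match
```

Such a filter would only surface candidates; judging whether a flagged sentence really constitutes a definition would still require the manual reading performed in this study.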
This definition is not generic, but refers specifically to a “system”. Its strong points are the “common physical computing infrastructure” and its emphasis on “costs”, one of the main drivers of multi-tenancy. Another definition is “Multi-tenancy allows a single application to emulate multiple application instances” (Azeez et al., 2010). This definition speaks specifically of an application, thereby excluding for instance hardware resources or databases.

Word Frequency Analysis: An analysis of frequently occurring terms was performed to find the main concepts in multi-tenancy definitions. The results of this analysis can be found in Table 4. Evidently, relevant aspects of multi-tenancy are the fact that something (single) is being shared among multiple customers, that it takes place at several levels (system, service, application, database, and infrastructure), and that it changes traditional modes of service or software delivery. To clarify, we have conceptualized a system in Figure 2, which we reuse for the definition later. The dotted boxes are parts of the system that are not influenced by software-level multi-tenancy. Efforts exist to apply multi-tenancy at the middleware level (Strauch et al., 2013), but we did not explicitly analyse this, for the sake of creating a high-level, general definition.

Checklist: A checklist containing five criteria was constructed in order to assess the quality of all definitions. The list is based on five principles discussed by Copi and Miller (1972). Furthermore, for each definition we attempted to establish whether it was abstract enough to play a part at all three levels (service, database, and infrastructure). The criteria were formulated as follows:

- A definition must set out the essential attributes of the thing defined.
- Definitions should avoid circularity.
- The definition must not be too wide or too narrow.
It must be applicable to everything to which the defined term applies (i.e. not miss anything out), and to nothing else (i.e. not include any things to which the defined term would not truly apply).
- The definition must not be obscure.
- A definition should not be negative where it can be positive.

Several definitions were selected to establish a baseline for the multi-tenancy definition in this paper, based on the criteria mentioned above. First, the definition given by Rimal, Choi, and Lumb is "multi-tenancy is when common resources and a single instance of both the object code of an application and the underlying database are used to support multiple customers simultaneously" (Rimal et al., 2009). The definition includes relevant aspects of multi-tenancy, such as "multiple customers" and "common resources", and it speaks of all three levels on which multi-tenancy can play a part (database, service, and hardware resources). However, the definition lacks a goal statement (what is the advantage of multi-tenancy?). Another definition is given by Guo et al.: "In a multi-tenant enabled service environment, user requests from different organizations and companies (tenants) are served concurrently by one or more hosted application instances based on the shared hardware and software infrastructure." (Guo et al., 2007). This definition addresses only two levels, but adds the possibility of multiple instances of the software. Finally, an interesting definition is "Multi-tenancy aims to enable a service environment that user requests from different tenants are served concurrently by the least number of hosted service instances running on the shared hardware and software infrastructure" (Li et al., 2008), which focuses on reducing costs by sharing resources.
Based on the definitions stated above we define multi-tenancy as follows:

**Definition:** Multi-tenancy is a property of a system where multiple customers, so-called tenants, transparently share the system's resources, such as services, applications, databases, or hardware, with the aim of lowering costs, while still being able to exclusively configure the system to the needs of the tenant.

This definition caters to different needs. To begin with, it mentions the most common terms used to identify multi-tenancy (with the sole exception of "instance", but more on that later). Furthermore, it embraces any kind of system and its layers, from a complete service system with multiple instances (like Salesforce.com) to a simple hard drive that is shared among different end-users. Thirdly, it provides the main aim for applying multi-tenancy in a context, namely the reduction of costs by sharing resources and achieving scalability. The words "single" and "instance" have been deliberately avoided, such that a qualifier can be used to determine whether we are speaking of single-instance or multiple-instance multi-tenancy. The definition prescribes that when someone assigns the property multi-tenant, it is assigned to a system, service, database, or hardware resource, to clarify on what layer the multi-tenancy aspect applies. Although a small detail, it must be noted that multi-tenancy is written with a hyphen in 75% of the definitions.

There are several clarifications that can be made with the definition at hand. First, the word "transparently" refers to the fact that it is generally unknown to customers and end-users that another customer or end-user is using the same resources; otherwise the definition would be applicable to any web application that is open to multiple users (Google.com, Facebook, etc.). A question that is frequently asked is what the differences are between multi-tenant, multi-user, and multi-instance systems.
The answer is that multi-instance systems do not necessarily need shared resources: a new system can be generated or deployed for each new user. Multi-tenant and multi-user systems, however, always share resources on one or more levels of the software stack. Multi-tenant systems share resources and allow for mass customization by using variability. Multi-user systems also share resources, but are only partly configurable and offer essentially the same functionality to all customers. Please see Table 5 for an overview of these differences.

<table> <thead> <tr> <th>Multi</th> <th>Shared resources</th> <th>Configurable at runtime</th> </tr> </thead> <tbody> <tr> <td>-tenant</td> <td>Yes</td> <td>Fully</td> </tr> <tr> <td>-user</td> <td>Yes</td> <td>Partly</td> </tr> <tr> <td>-instance</td> <td>Possibly</td> <td>Possibly</td> </tr> </tbody> </table>

6. Research Agenda

In order to structure and guide future research in the area of multi-tenancy for both academics and practitioners, this section presents the major future research topics identified in current research on multi-tenancy. The "future work" sections of all final papers identified in the systematic mapping studies were analyzed to extract potential future research topics. For this search all sections named "future work", "further work", "discussion" and "conclusion" were included. Also, all papers were searched entirely, using the keyword "future". First, all topics mentioned in the relevant sections were listed, after which synonyms and closely related issues were merged into overarching research themes. Classification and merging of the topics was performed by two researchers separately, after which the results were compared and discussed. This way, 23 issues were identified, which were categorized into four research themes. The analysis is based on the 48 papers that were collected in the structured mapping study.
Every call for future work identified in the papers reflects a potentially strategic theme in the domain of multi-tenancy. Each of the themes below states the number of papers that address the theme and mention a specific call to action to researchers and practitioners.

**Quality Assurance (6)** — Compliance with Service Level Agreements (SLAs), performance, and monitoring are all mentioned in the current body of multi-tenancy literature as important issues to address in future research. Most issues within this topic are similar to important research challenges in the domain of SaaS (Zhang et al., 2010). This can be explained by the fact that multi-tenant software is always hosted in a SaaS environment, causing challenges in this domain to influence the multi-tenancy domain as well. *Call:* An investigation into how customization of the multi-tenant application affects quality, e.g. in terms of performance. Can one general SLA be upheld, or should each tenant get a tenant-specific SLA?

**Industry Validation (4)** — Some papers reported on multi-tenant prototypes created, but all were missing a real validation. Because of this, a high number of papers call for industrial application of multi-tenant solutions. Applying prototypes in real industrial settings and performing more multi-tenancy-related case studies can greatly enhance the validity of multi-tenancy research and is therefore considered to be a major topic in future research. *Call:* With industrial multi-tenant solutions being developed right now, a next step for researchers is to work closely together with industry to validate research ideas on actual multi-tenant software systems.

**Balancing & Placement (4)** — Although all customers in a multi-tenant environment theoretically are served from one instance of a software product, in practice, load balancing is needed between servers. This means identical servers are used to serve one software product in case this can no longer be done using one server.
Specific tenants need to be placed on a specific server, but determining the best placement is a difficult task. *Call:* There might be opportunities to develop better load balancing algorithms that take into account the historical usage of the application by the different tenants. Specifically, the load balancing can be targeted at the different time zones in which the tenants are operating.

**Database (4)** — Four papers in the systematic mapping study explicitly mentioned database-related issues as an important future research direction. Areas of interest include parallelism, locking, replication and partitioning. *Call:* A major point of concern that we noted in the blog posts is data isolation, i.e., making sure that the data of individual tenants is shielded from other tenants. As such, an investigation into how to isolate and partition the data is a logical next step. Additionally, developing tests to make sure that data isolation is working correctly is also an interesting avenue for future work.

Three additional themes were identified, but were not sufficiently highlighted to count towards a valid collection of research themes. Although these themes were not emphasized by a sufficient number of authors, we mention them here briefly, to provide insight into other issues that are relevant. First, two papers mention the development of and research on multi-tenant platforms as an important next step in multi-tenancy research. The **development of a multi-tenant platform (2)** enables other researchers and developers to more easily deploy and test multi-tenant applications. Such a platform (e.g. Salesforce (Fisher, 2007)) is likely to stimulate multi-tenancy research and development. The **call** in this context would be the need for an open platform available for multi-tenant applications. Researchers and industry should work together in designing, developing, and maintaining such a platform.
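The data-isolation testing mentioned in the *Database* call above can be made concrete. The following is a minimal sketch, not taken from any of the surveyed papers: it assumes a hypothetical single shared table with a `tenant_id` discriminator column, a tenant-scoped access class (our own name, `TenantStore`) that appends the tenant filter to every query, and an isolation assertion that one tenant never sees another tenant's rows.

```python
import sqlite3

class TenantStore:
    """Tenant-scoped data access: every query is filtered by tenant_id,
    so a tenant can only ever read its own rows (hypothetical sketch)."""

    def __init__(self, conn, tenant_id):
        self.conn = conn
        self.tenant_id = tenant_id

    def add(self, body):
        # Every write is stamped with the owning tenant.
        self.conn.execute(
            "INSERT INTO docs (tenant_id, body) VALUES (?, ?)",
            (self.tenant_id, body))

    def all_docs(self):
        # Every read is restricted to the owning tenant.
        rows = self.conn.execute(
            "SELECT body FROM docs WHERE tenant_id = ?",
            (self.tenant_id,))
        return [r[0] for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (tenant_id TEXT, body TEXT)")

acme = TenantStore(conn, "acme")
globex = TenantStore(conn, "globex")
acme.add("acme-secret")
globex.add("globex-secret")

# The isolation test itself: neither tenant sees the other's data.
assert acme.all_docs() == ["acme-secret"]
assert globex.all_docs() == ["globex-secret"]
```

A real multi-tenant system would enforce the same invariant at a lower layer (row-level security, separate schemas, or separate databases), but the shape of the test is the same: write as two tenants, then assert disjoint visibility.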
Secondly, **security (2)** is a recurring theme in future work (Zhang et al., 2010), where papers specifically focus on the fact that different organizations, each having their own confidential data, are typically deployed on the same server and use the same instance of a software product. This increases the risk of data accidentally being queried by the wrong tenant. This leads to a **call** for security to receive even more attention in multi-tenant systems than it already does in multi-instance and multi-user systems.

Finally, a theme that occurs only once in the literature that we surveyed, but poses a relevant challenge, is **variability (1)**. Since multi-tenant software is almost exclusively used in a setting in which multiple different organizations use the same instance of a software product, variability is an important research topic. Variability is the ability of a software product to offer different configurations to organizations hosted on one instance of a software product. The definition of multi-tenancy presented in this paper also mentions 'varying customers', inducing the need for variability (Kabbedijk and Jansen, 2012). The corresponding **call** is that there should be more awareness of the importance of variability in multi-tenant software.

7. Threats to Validity

Since conducting a systematic mapping study is a largely manual task, most threats to validity relate to the possibility of researcher bias, and thus to the concern that other researchers might come to different results and conclusions. One remedy we adopted is to follow, where possible, guidelines on conducting systematic mapping studies as suggested by Budgen et al. (2008) and Petersen et al. (2008). The question of whether an article or blog post should be included in the mapping study is sometimes debatable. Following the advice of Kitchenham (2004), we enforced this criterion by utilizing predefined selection criteria that clearly define the scope (also see Section 2).
A potential threat to the validity of the interpretation of the results is researcher bias in the selection and filtering of the articles and blog posts. Our countermeasures were (1) the systematic logging of all data related to the screening and filtering steps in a database accessible by all authors of the paper and (2) randomly selecting 10% of all papers after each selection or filtering step to determine the inter-rater agreement for that subset of papers. If a paper was rated differently by another author, the discrepancy was discussed. Finally, this research assessed results published up to 2012, so the landscape of multi-tenancy could have evolved slightly in the meantime. This is identified as a threat to validity.

8. Conclusion

A total of 761 research papers and 371 industrial blogs on multi-tenancy have been analyzed in order to get a complete overview of the multi-tenancy domain. The results show that most papers propose a solution related to multi-tenancy, but almost no papers report on industrial experiences while implementing multi-tenancy, providing some insight into the maturity of the domain. The blog analysis shows multi-tenancy is a popular topic and most blogs are written by individuals instead of corporations. Based on the research results a comprehensive definition for multi-tenancy is proposed (SubRQ1), positioning multi-tenancy as an architectural principle of a system where multiple varying customers and their end-users transparently share the system's services, applications, databases, or hardware resources, with the aim of lowering costs. We call for this definition to be used in future research on multi-tenancy to further structure results and communication. No clear difference in the interpretation of multi-tenancy between academia and industry was observed, but we did see a significant difference in focus between academia and industry (SubRQ2).
For future research we listed four themes (SubRQ3), meant to guide future research and provide a roadmap within the domain of multi-tenancy. The main research question (RQ) is answered by the complete picture of the current multi-tenancy domain drawn from both the academic and industrial perspective, together with the directions for steering the domain from this point on.

References

URL http://goo.gl/x3yybz
URL http://goo.gl/0gn8xV

Appendix A. Systematic Mapping Study Paper List

Here a complete list of all final papers identified within the systematic mapping study is presented, in alphabetical order of first author.

Li, X. H., Liu, T. C., Li, Y., Chen, Y., 2008. Spin: Service performance isolation infras- 7th International Conference on. IEEE, pp. 479–483.
Introduction

Retrieval requests may take one of two forms.

- **Retrieving a Specified Video:** The user specifies the video he wants to see, e.g. "Show me *The Sound of Music*".
- **Identifying and Retrieving Video Segments:** The user might express a query such as *Find all videos in which John Wayne appears with a gun*. This query requires that we:
  - identify the movies in which John Wayne appears with a gun and
  - identify the segments within those movies in which John Wayne appears with a gun.
- Once we can organize the content of a single video, we can organize the content of a set of videos.

Organizing Content of a Single Video

We must ask ourselves the following questions:

1. *Which* aspects of the video are likely to be of interest to the users who access the video archive?
2. *How* can these aspects of the video be stored efficiently, so as to minimize the time needed to answer user queries?
3. What should *Query Languages* for video data look like, and how should the relational model of data be extended to handle video information?
4. Can the *Content Extraction* process be automated, and if so, how can the reliability of such content extraction techniques be taken into account when processing queries?

Video Content: Which Aspects of a Video To Store?

Example: An 8-hour, one-day lecture of a short course given by a professor on the topic *Multimedia Databases*. In this case, the video contains a set of "items of interest." These items of interest could include:

1. *People* such as the professor, any guest lecturer (or lecturers) who speak at selected times in the course, and any students who might ask questions and/or distinguish themselves in other ways. For instance, Prof. Felix might be one such person, while Erica might be a student.
2. *Activities* that occur in the class, such as *lecturing* (on a particular topic, by a particular individual), *questioning* (by a particular student), or *answering* a question posed by a particular student.
Other activities could involve general group discussions and/or coffee breaks. In addition, activities have attributes, e.g. *lecturing*(quadtrees, Prof. Felix) indicates an activity involving Prof. Felix lecturing on quadtrees, and *questioning*(Erica, Prof. Felix) indicates that Prof. Felix was questioned by Erica.

Movie Example

- Consider the movie *The Sound of Music*.
- Items of interest include:
  1. *People* such as Maria, Count Von Trapp, and others;
  2. *Inanimate objects* such as the piano in Count Von Trapp's house;
  3. *Animate objects* such as the ducks and birds in the pond;
  4. *Activities* such as singing and dancing, with their associated list of attributes. For example, the activity *singing* may have two attributes: (a) *Singer*, specifying which person is singing, and (b) *Song*, specifying the name of the song.
- Certain common characteristics occur. Given any frame $f$ in the video, the frame $f$ has a set of associated objects and associated activities.
- Objects/activities have certain properties, and these properties may vary from one frame to another.
- **Creating a video database means we should be able to index all these associations.**

Properties

- **Property:** Consists of a pair \((\text{pname}, \text{Values})\) where:
  - \(\text{pname}\) is the *Name* of the property,
  - \(\text{Values}\) is a set.
- **Property Instance:** An expression of the form \(\text{pname} = v\) where \(v \in \text{Values}\).
- Example properties:
  1. \((\text{height}, \mathbb{R}^+)\) is the "height" property with real values;
  2. \((\text{primarycolors}, \{\text{red, green, blue}\})\) is a property called \(\text{primarycolors}\) with values red, green, and blue.

Object Scheme

- **Object Scheme**: A pair \((fd, fi)\) where:
  1. \(fd\) is a set of *frame-dependent* properties;
  2. \(fi\) is a set of *frame-independent* properties;
  3. \(fd\) and \(fi\) are disjoint sets.
- If \((pname, Values)\) is a property in \(fd\), then the property named \(pname\) may assume different instances depending upon the video frame being considered. E.g. the property *shirtcolor* varies from frame to frame.
- **Object Instance**: An *Object Instance* is a triple \((oid, os, ip)\) where:
  1. \(oid\) is a string called the object-id,
  2. \(os = (fd, fi)\) is an object scheme, and
  3. \(ip\) is a set of statements such that:
     (a) for each property \((pname, Values)\) in \(fi\), \(ip\) contains *at most* one property instance of \((pname, Values)\) and
     (b) for each property \((pname, Values)\) in \(fd\) and each frame \(f\) of the video, \(ip\) contains *at most* one property instance of \((pname, Values)\); this property instance is denoted by the expression \(pname = v \text{ IN } f\).

Example

- A surveillance video of 5 frames, showing the house of Denis Dopeman.
- Frame 1: We see Jane Shady at the path leading to Mr. Dopeman's door. She is carrying a briefcase.
- Frame 2: She is halfway along the path to the door. The door opens. Mr. Dopeman appears at the door.
- Frame 3: Jane Shady and Denis Dopeman are next to each other at the door; Jane Shady is still carrying the briefcase.
- Frame 4: Jane Shady is walking back, and Denis Dopeman has the briefcase.
- Frame 5: Jane Shady is at the beginning of the path to Denis Dopeman's door. The door is shut.

Example, contd.
Frame-dependent properties <table> <thead> <tr> <th>Frame</th> <th>Objects</th> <th>Frame-dependent properties</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Jane Shady</td> <td>has(briefcase), at(path_front)</td> </tr> <tr> <td></td> <td>dopeman_house</td> <td>door(closed)</td> </tr> <tr> <td></td> <td>Briefcase</td> <td></td> </tr> <tr> <td>2</td> <td>Jane Shady</td> <td>has(briefcase), at(path_middle)</td> </tr> <tr> <td></td> <td>Denis Dopeman</td> <td>at(door)</td> </tr> <tr> <td></td> <td>dopeman_house</td> <td>door(open)</td> </tr> <tr> <td></td> <td>Briefcase</td> <td></td> </tr> <tr> <td>3</td> <td>Jane Shady</td> <td>has(briefcase), at(door)</td> </tr> <tr> <td></td> <td>Denis Dopeman</td> <td>at(door)</td> </tr> <tr> <td></td> <td>dopeman_house</td> <td>door(open)</td> </tr> <tr> <td></td> <td>Briefcase</td> <td></td> </tr> <tr> <td>4</td> <td>Jane Shady</td> <td>at(door)</td> </tr> <tr> <td></td> <td>Denis Dopeman</td> <td>has(briefcase), at(door)</td> </tr> <tr> <td></td> <td>dopeman_house</td> <td>door(open)</td> </tr> <tr> <td></td> <td>Briefcase</td> <td></td> </tr> <tr> <td>5</td> <td>Jane Shady</td> <td>at(path_middle)</td> </tr> <tr> <td></td> <td>dopeman_house</td> <td>door(closed)</td> </tr> <tr> <td></td> <td>Briefcase</td> <td></td> </tr> </tbody> </table> ## Frame independent properties <table> <thead> <tr> <th>Object</th> <th>Frame-independent property</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Jane Shady</td> <td>age</td> <td>35</td> </tr> <tr> <td></td> <td>height</td> <td>170 (cms)</td> </tr> <tr> <td>dopeman house</td> <td>address</td> <td>6717 Pimmit Drive Falls Church, VA 22047.</td> </tr> <tr> <td></td> <td>type</td> <td>brick</td> </tr> <tr> <td></td> <td>color</td> <td>brown</td> </tr> <tr> <td>Denis Dopeman</td> <td>age</td> <td>56</td> </tr> <tr> <td></td> <td>height</td> <td>186</td> </tr> <tr> <td>briefcase</td> <td>color</td> <td>black</td> </tr> <tr> <td></td> <td>length</td> <td>40 (cms)</td> </tr> <tr> <td></td> 
<td>width</td> <td>31 (cms)</td> </tr> </tbody> </table>

Activity Schema

- An Activity Scheme, \( \text{ACT\_SCH} \), is a finite set of properties such that if \((\text{pname}, \text{Values}_1)\) and \((\text{pname}, \text{Values}_2)\) are both in \( \text{ACT\_SCH} \), then \( \text{Values}_1 = \text{Values}_2 \).
- Example: Consider the activity ExchangeObject, such as the exchange of objects between Jane Shady and Denis Dopeman. This activity has the three-pair scheme consisting of the pairs:
  1. \((\text{Giver}, \text{Person})\): This pair specifies that the activity ExchangeObject has a property called \(\text{Giver}\) specifying who is transferring the object in question. This says that the property \(\text{Giver}\) is of type \(\text{Person}\). \(\text{Person}\) is the set of all persons.
  2. \((\text{Receiver}, \text{Person})\): This pair specifies that the activity ExchangeObject has a property called \(\text{Receiver}\) specifying who is receiving the object in question.
  3. \((\text{Item}, \text{Thing})\): This pair specifies the item being exchanged. \(\text{Thing}\) is the set of all "exchange-able" items.

Thus, the exchange of the briefcase that occurred between Jane Shady and Denis Dopeman can be captured as an activity with \(\text{Giver}\) = Jane Shady, \(\text{Receiver}\) = Denis Dopeman, and \(\text{Item}\) = briefcase.

Activity/Event

- An Activity is a pair:
  1. AcID: the "name" of the activity of scheme ACT_SCH, and
  2. for each pair (pname, Values) ∈ ACT_SCH, an equation of the form pname = v where v ∈ Values.
- Any activity has an associated activity scheme, and each property of the activity has an associated value from its set of possible values.
- Example:
  1. The activity Lecturing may have the scheme \[(\text{Lecturer, Person}), (\text{Topic, String})\] and may contain the equations: \[\text{Lecturer} = \text{Prof. Felix.}\] \[\text{Topic} = \text{Video Databases.}\]
  2.
Likewise, the activity Questioning may have the scheme \[(\text{Questioner, Person}), (\text{Questionee, Person}), (\text{Question, String}), (\text{Answer, String})\] and may contain the equations: \[\text{Questioner} = \text{Erica.}\] \[\text{Questionee} = \text{Prof. Felix.}\] \[\text{Question} = \text{How many children does a quadtree node have?}\] \[\text{Answer} = \text{At most 4.}\]

Video Content

- Suppose \( v \) is a video.
- Let \( \text{framenum}(v) \) specify the total number of frames of video \( v \).
- The content of \( v \) consists of a triple \((\text{OBJ}, \text{AC}, \lambda)\) where:
  1. \( \text{OBJ} = \{\text{oid}_1, \ldots, \text{oid}_n\} \) is a finite set of object instances;
  2. \( \text{AC} = \{\text{AcID}_1, \ldots, \text{AcID}_k\} \) is a finite set of activities/events; and
  3. \( \lambda \) is a map from \( \{1, \ldots, \text{framenum}(v)\} \) to \( 2^{\text{OBJ} \cup \text{AC}} \).
- Intuitively,
  1. \( \text{OBJ} \) represents the set of objects of interest in the video,
  2. \( \text{AC} \) represents the set of activities of interest in the video, and
  3. \( \lambda \) tells us which objects and which activities are associated with any given frame \( f \) of the video.
- Though this definition assumes that \( \lambda \) will be specified on a frame-by-frame basis, this is not required, as we will see later.

Video Library

- A *video library*, VidLib, consists of a finite set of 5-tuples \((\text{Vid\_Id}, \text{VidContent}, \text{framenum}, \text{plm}, \mathcal{R})\) where:
  1. \(\text{Vid\_Id}\) is the *Name* of the video,
  2. \(\text{VidContent}\) is the *Content* of the video,
  3. \(\text{framenum}\) is the number of frames in the video,
  4. \(\text{plm}\) is a *placement mapping* that specifies the address of different parts of the video, and
  5. \(\mathcal{R}\) is a set of relations about videos "as a whole".
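The content triple \((\text{OBJ}, \text{AC}, \lambda)\) translates directly into a dictionary-based sketch. The data below encodes the surveillance example in simplified form, and the function mimics the spirit of a `FindVideoWithObject`-style lookup over a single video; the identifiers are ours, not an official API.

```python
# Minimal sketch of one video's content triple (OBJ, AC, lambda):
# lam maps each frame number to the set of object/activity ids present in it.
OBJ = {"jane_shady", "denis_dopeman", "briefcase", "dopeman_house"}
AC = {"ExchangeObject"}
lam = {
    1: {"jane_shady", "briefcase", "dopeman_house"},
    2: {"jane_shady", "denis_dopeman", "briefcase", "dopeman_house"},
    3: {"jane_shady", "denis_dopeman", "briefcase", "dopeman_house",
        "ExchangeObject"},
    4: {"jane_shady", "denis_dopeman", "briefcase", "dopeman_house"},
    5: {"jane_shady", "dopeman_house"},
}

def find_video_with_object(lam, oid):
    """Return maximal segments (s, e) such that oid appears in every frame
    from s through e inclusive (the spirit of FindVideoWithObject)."""
    frames = sorted(f for f, items in lam.items() if oid in items)
    segments = []
    for f in frames:
        if segments and f == segments[-1][1] + 1:
            # Extend the current run of consecutive frames.
            segments[-1] = (segments[-1][0], f)
        else:
            # Start a new segment.
            segments.append((f, f))
    return segments

print(find_video_with_object(lam, "denis_dopeman"))  # [(2, 4)]
```

A library-wide version would simply iterate this over all videos and prepend the video id, yielding the `(VideoId, StartFrame, EndFrame)` triples described above.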
### Organization of a simple video library

<table> <thead> <tr> <th>Vid_Id</th> <th>framenum</th> <th>Relations</th> <th>plm</th> <th>VidContent</th> </tr> </thead> <tbody> <tr> <td>vid1.mpg</td> <td>9999</td> <td>date, place</td> <td></td> <td></td> </tr> <tr> <td>vid2.mpg</td> <td>4000</td> <td></td> <td></td> <td></td> </tr> <tr> <td>vid3.mpg</td> <td>16000</td> <td></td> <td></td> <td></td> </tr> </tbody> </table>

Query Languages for Video Data

Querying video involves the following types of queries.

- **Segment Retrievals**: Find all segments, from one or more videos in the library, that satisfy a given condition.
- **Object Retrievals**: Given a video $v$ and a segment $[s, e]$ (start frame through end frame) of the video, find all objects that occurred in:
  - all frames between $s$ and $e$ (inclusive),
  - some frame between $s$ and $e$ (inclusive).
- **Activity Retrievals**: Given a video $v$ and a segment $[s, e]$ (start frame through end frame) of the video, find all activities that occurred in:
  - all frames between $s$ and $e$ (inclusive),
  - some frame between $s$ and $e$ (inclusive).
- **Property-based Retrievals**: Find all videos, and video segments, in which objects/activities with certain properties occur.

Video Functions

- **FindVideoWithObject**(o): Given the name of a data object o, this function returns as output a set of triples of the form (VideoId, StartFrame, EndFrame) such that if \((v, s, e)\) is a triple returned in the output, then video \(v\)'s segment starting at frame \(s\) and ending at frame \(e\) has the object \(o\) in all frames between and including \(s\) and \(e\).
- **FindVideoWithActivity**(a): This does exactly the same as above, except that it returns all triples \((v, s, e)\) such that video \(v\)'s segment starting at frame \(s\) and ending at frame \(e\) has the activity \(a\) in it. For each property \(p\), the notation \(a.p\) specifies the value of that property.
- **FindVideoWithActivityandProp**(a,p,z): This does exactly the same as above, except that it returns all triples \((v, s, e)\) such that video \(v\)'s segment starting at frame \(s\) and ending at frame \(e\) has the activity \(a\) in it with \(z\) as the value of property \(p\).
- **FindVideoWithObjectandProp**(o,p,z): This does exactly the same as above, except that it returns all triples \((v, s, e)\) such that video \(v\)'s segment starting at frame \(s\) and ending at frame \(e\) has the object \(o\) in it with \(z\) as the value of property \(p\).
- **FindObjectsInVideo**(v,s,e): Given the name of a video, and a start and end frame, this returns all objects that appear in all frames of the video between s and e (inclusive).
- **FindActivitiesInVideo**(v,s,e): Identical to the above, except it applies to activities, not objects.
- **FindActivitiesAndPropsInVideo**(v,s,e): Given the name of a video, and a start and end frame, this returns a set of records of the form \[ \text{activityname: prop1 = entity1; prop2 = entity2; \ldots; propk = entityk} \] comprising all activities, and their associated properties, that occur at all times between s and e of video v.
- **FindObjectsAndPropsInVideo**(v,s,e): Identical to the above, except that it applies to objects, not to activities.

Video Query Languages

- A standard SQL query has the form:

```
SELECT field1,...,fieldn
FROM relation1 (R1), relation2 (R2), ..., relationk (Rk)
WHERE Condition.
```

- We expand this so that:
  1. The **SELECT** statement may contain entries of the form \[ \text{VidId} : [s, e] \] denoting the selection of a video with id \(\text{VidId}\) and with the relevant segment comprised of frames between \( s \) and \( e \) inclusive.
  2. The **FROM** statement may contain entries of the form \[ \text{video(source)}(V) \] which says that \( V \) is a variable ranging over videos from the source named.
  3.
The **WHERE** condition allows statements of the form
\[ \text{term IN func-call} \]
where: (a) *term* is either a variable, an object, an activity, or a property value, and (b) *func-call* is any of the eight video functions listed above.

### Examples

- “Find all videos and their relevant segments from video library VidLib$_1$ that contain Denis Dopeman.”

  SELECT vid:[s,e]
  FROM video:VidLib$_1$
  WHERE (vid,s,e) IN FindVideoWithObject(Denis Dopeman).

- “Find all videos and their relevant segments from video library VidLib$_1$ that contain Denis Dopeman and Jane Shady.”

  SELECT vid:[s,e]
  FROM video:VidLib$_1$
  WHERE (vid,s,e) IN FindVideoWithObject(Denis Dopeman) AND
        (vid,s,e) IN FindVideoWithObject(Jane Shady).

- “Find all videos and their relevant segments from video library VidLib$_1$ that contain Denis Dopeman and Jane Shady exchanging a briefcase.”

  SELECT vid:[s,e]
  FROM video:VidLib$_1$
  WHERE (vid,s,e) IN FindVideoWithObject(Denis Dopeman) AND
        (vid,s,e) IN FindVideoWithObject(Jane Shady) AND
        (vid,s,e) IN FindVideoWithActivityandProp(ExchangeObject, Item, Briefcase) AND
        (vid,s,e) IN FindVideoWithActivityandProp(ExchangeObject, Giver, Jane Shady) AND
        (vid,s,e) IN FindVideoWithActivityandProp(ExchangeObject, Receiver, Denis Dopeman)

### Indexing Video Content

- Now that we have defined content, we need to index it.
- We have 8 types of video retrieval functions. Indexing must support efficient execution of these 8 function types.
- It is impractical to store video content on a frame-by-frame basis: at 30 frames per second, a single 90-minute video contains 162,000 frames.
- We need compact representations to store video content.
- Two such data structures:
  - Frame Segment Tree
  - R-Segment Tree

### Frame Segment Trees

- **Frame-sequence** is a pair $[i, j)$ where $1 \leq i, j \leq n$.
$[i, j)$ represents the set of all frames between $i$ (inclusive) and $j$ (non-inclusive), i.e.
$$[i, j) = \{k \mid i \leq k < j\}.$$
- **EX:** $[6, 12)$ denotes the set of frames $\{6, 7, 8, 9, 10, 11\}$.
- **Frame-sequence Ordering:** $[i_1, j_1) \sqsubseteq [i_2, j_2)$ iff $i_1 < j_1 \leq i_2 < j_2$.
- $[i_1, j_1) \sqsubseteq [i_2, j_2)$ means that the sequence of frames denoted by $[i_1, j_1)$ precedes the sequence of frames denoted by $[i_2, j_2)$.
- **EX:** Consider frame-sequences $fs_1 = [10, 15)$, $fs_2 = [8, 10)$ and $fs_3 = [11, 13)$.
  - $fs_2 \sqsubseteq fs_1$
  - $fs_2 \sqsubseteq fs_3$
  - $fs_1 \not\sqsubseteq fs_3$.
- **Well-Ordered Set of Frame-sequences:** A set, $X$, of frame-sequences is said to be well-ordered iff:
  1. $X$ is finite, i.e. $X = \{[i_1, j_1), \ldots, [i_r, j_r)\}$ for some integer $r$, and
  2. $[i_1, j_1) \sqsubseteq [i_2, j_2) \sqsubseteq \ldots \sqsubseteq [i_r, j_r)$.
- **EX:** $X = \{[1, 4), [9, 13), [33, 90)\}$ is a well-ordered set of frame-sequences because $[1, 4) \sqsubseteq [9, 13) \sqsubseteq [33, 90)$.

### Frame Segment Trees, Continued

- **Solid Set of Frame-sequences:** A set, \( X \), of frame-sequences is said to be *solid* iff
  1. \( X \) is well-ordered, and
  2. there is no pair of frame-sequences in \( X \) of the form \([i_1, i_2)\) and \([i_2, i_3)\).
- Take \( X = \{[1, 5), [5, 7), [9, 11)\}\).
  - \( X \) is not solid. Why? Because \([1, 5)\) and \([5, 7)\) meet at frame 5 and should be merged into \([1, 7)\).
- Take \( Y = \{[1, 7), [9, 11)\}\). This is solid.
- **Segment Association Map:** Suppose (OBJ, AC, \( \lambda \)) represents the content of a video \( v \). A *Segment Association Map* \( \sigma_v \) associated with video \( v \) is the map defined as follows:
  1. \( \sigma_v \)'s domain is OBJ \( \cup \) AC and
  2.
\( \sigma_v \) returns, for each \( x \in \text{OBJ} \cup \text{AC} \), a *solid* set of frame-sequences, denoted \( \sigma_v(x) \), such that:
  - (a) if \([s, e) \in \sigma_v(x)\), then for all \( s \leq f < e \), it is the case that \( x \in \lambda(f) \), and
  - (b) for all frames \( f \) and all \( x \in \text{OBJ} \cup \text{AC} \), if \( x \in \lambda(f) \), then there exists a frame-sequence \([s, e) \in \sigma_v(x)\) such that \( f \in [s, e) \).

### An example of a video's content

- 5000 frames in the example.
- The table below shows how many frames each object appears in:

<table> <thead> <tr> <th>Object</th> <th>Number of frames</th> </tr> </thead> <tbody> <tr> <td>object1</td> <td>1250</td> </tr> <tr> <td>object2</td> <td>1500</td> </tr> <tr> <td>object3</td> <td>3250</td> </tr> <tr> <td>object4</td> <td>1000</td> </tr> <tr> <td>object5</td> <td>2750</td> </tr> </tbody> </table>

- To explicitly represent the mapping $\lambda$ associated with the content of this video, we would need a total of 9750 tuples.
- Instead, represent the information with 16 tuples as shown below.
### Segment Table

<table> <thead> <tr> <th>Object</th> <th>Segment</th> </tr> </thead> <tbody> <tr> <td>object1</td> <td>250–750</td> </tr> <tr> <td>object1</td> <td>1750–2500</td> </tr> <tr> <td>object2</td> <td>250–1000</td> </tr> <tr> <td>object2</td> <td>2250–2500</td> </tr> <tr> <td>object2</td> <td>2750–3250</td> </tr> <tr> <td>object3</td> <td>0–250</td> </tr> <tr> <td>object3</td> <td>500–750</td> </tr> <tr> <td>object3</td> <td>1000–1750</td> </tr> <tr> <td>object3</td> <td>2500–2750</td> </tr> <tr> <td>object3</td> <td>3250–5000</td> </tr> <tr> <td>object4</td> <td>1500–2250</td> </tr> <tr> <td>object4</td> <td>4500–5000</td> </tr> <tr> <td>object5</td> <td>250–750</td> </tr> <tr> <td>object5</td> <td>1250–2750</td> </tr> <tr> <td>object5</td> <td>3500–3750</td> </tr> <tr> <td>object5</td> <td>4500–5000</td> </tr> </tbody> </table>

### Frame-segment tree structure

- Suppose there are \( n \) objects \( o_1, \ldots, o_n \) in our video \( v \) and \( m \) activities \( a_1, \ldots, a_m \).
- Then we have a total of
  \[ \sum_{i=1}^{n} \text{card}(\sigma_v(o_i)) + \sum_{j=1}^{m} \text{card}(\sigma_v(a_j)) \]
  entries in the table for just one single video.
- FS-trees use the following components:
  - **OBJECTARRAY:** specifies, for each object, an *ordered linked list* of pointers to nodes in the frame segment tree specifying which segments the object appears in.
  - **ACTIVITYARRAY:** specifies, for each activity, an *ordered linked list* of pointers to nodes in the frame segment tree specifying which segments the activity occurs in.
- The FS-tree is now constructed from the segment table.

### Frame-segment trees, continued

- **Step 1**: Let \([s_1, e_1], \ldots, [s_w, e_w]\) be all the intervals in the “Segment” column of the segment table. Let \( q_1, \ldots, q_z \) be an enumeration, in ascending order, of all members of \(\{s_i, e_i \mid 1 \leq i \leq w\}\) with duplicates eliminated.
If \(z\) is not a power of 2, then do as follows: let \(r\) be the smallest integer such that \(z < 2^r\) and \(2^r > \text{framenum}(v)\). Add new elements \(q_{z+1}, \ldots, q_{2^r}\) such that \(q_{2^r} = \text{framenum}(v) + 1\) and \(q_{z+j} = q_z + j\) (for \(z + j < 2^r\)). *By virtue of the above construction, we may proceed under the assumption that \(z\) is a power of 2, i.e. \(z = 2^r\) for some \(r\).*
- **Step 2**: The frame-segment tree is a binary tree constructed as follows.
  1. Each node in the frame segment tree represents a *frame-sequence* \([x, y)\) starting at frame \(x\) and including all frames up to, but not including, frame \(y\).
  2. Every leaf is at level \(r\). The leftmost leaf denotes the interval \([q_1, q_2)\), the second from left-most represents the interval \([q_2, q_3)\), the third from left-most represents the interval \([q_3, q_4)\), and so on. If \(N\) is a node with two children representing the intervals \([p_1, p_2)\) and \([p_2, p_3)\), then \(N\) represents the interval \([p_1, p_3)\). Thus, the root of the segment tree represents the interval \([q_1, q_z)\) if \(z\) is a power of 2; otherwise it represents the interval \([q_1, \infty)\).
  3. The number inside each node may be viewed as the address of that node.
  4. The set of numbers placed next to a node denotes the identifiers of video objects and activities that appear in the entire frame-sequence associated with that node. Thus, for example, if a node \(N\) represents the frame sequence \([i, j)\) and object \(o\) occurs in all frames in \([i, j)\), then object \(o\) labels node \(N\) (unless object \(o\) labels an ancestor of node \(N\) in the tree).
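Steps 1 and 2 above, together with the labeling rule in item 4, can be sketched in Python. This is a minimal illustration with hypothetical helper names (`Node`, `build`, `insert`), not the book's exact data structures, and the padding loop is a simplified stand-in for Step 1.

```python
class Node:
    def __init__(self, lb, ub):
        self.lb, self.ub = lb, ub   # node covers the frame-sequence [lb, ub)
        self.left = self.right = None
        self.labels = []            # objects/activities spanning all of [lb, ub)

def build(points, lo, hi):
    """Build a binary tree over the elementary intervals [points[i], points[i+1])."""
    node = Node(points[lo], points[hi])
    if hi - lo > 1:
        mid = (lo + hi) // 2
        node.left = build(points, lo, mid)
        node.right = build(points, mid, hi)
    return node

def insert(node, s, e, obj):
    """Label the maximal nodes whose interval lies inside [s, e), so that obj
    never labels a node whose ancestor it already labels (item 4)."""
    if e <= node.lb or node.ub <= s:        # disjoint
        return
    if s <= node.lb and node.ub <= e:       # fully covered: label and stop
        node.labels.append(obj)
        return
    if node.left is None:                   # leaf; s, e are grid points, so nothing to do
        return
    insert(node.left, s, e, obj)
    insert(node.right, s, e, obj)

# A few rows of the segment table, as (object, start, end) with half-open ends:
table = [("object1", 250, 750), ("object1", 1750, 2500),
         ("object3", 0, 250), ("object3", 500, 750)]
points = sorted({p for (_, s, e) in table for p in (s, e)})
# Pad so the number of elementary intervals is a power of two (simplified Step 1).
while (len(points) - 1) & (len(points) - 2):
    points.append(points[-1] + 1)
root = build(points, 0, len(points) - 1)
for obj, s, e in table:
    insert(root, s, e, obj)
```

Each segment labels only the maximal nodes it fully covers, so querying a frame means collecting labels along the root-to-leaf path containing that frame.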
### Example

*(Figure: a timeline showing the frame-sequences in which object1 through object5 occur over frames 0–5000, and the frame-segment tree built from them; figures omitted.)*

*(Figure: the structure of a tree node, with fields LB and UB for its lower and upper bounds (e.g. LB=250, UB=500), LLINK and RLINK for its children, and an OBJ field pointing to locations in OBJECTARRAY; in the example, pointers lead to nodes 17, 18, 23 and 24 of the frame-segment tree.)*

### Video Library

- Suppose our video library, VidLib, contains videos $v_1, \ldots, v_n$.
- Create a table called INTOBJECTARRAY having the scheme (VID\_ID, OBJ, PTR).
- Tuple $(v, o, ptr)$ is in INTOBJECTARRAY iff the pair $(o, ptr)$ is in the OBJECTARRAY associated with video $v$.
- Create a table called INTACTIVITYARRAY having the scheme (VID\_ID, ACT, PTR).
- Tuple $(v, a, ptr)$ is in INTACTIVITYARRAY iff the pair $(a, ptr)$ is in the ACTIVITYARRAY associated with video $v$.
- For each $v_i$, a frame segment tree, $\text{fst}(v_i)$, is associated with video $v_i$.
- The only difference from before is that pointers from the frame segment tree point to locations in INTOBJECTARRAY and INTACTIVITYARRAY rather than to OBJECTARRAY and ACTIVITYARRAY as described earlier.

### Implementing Video Operations

- **FindVideoWithObject(o):**

```sql
SELECT VID_ID
FROM INTOBJECTARRAY
WHERE OBJ = o.
```

- **FindVideoWithActivityandProp(a,p,z):**

```sql
SELECT VID_ID
FROM INTACTIVITYARRAY t
WHERE t.ACT = a AND t.p = z.
```

- **FindObjectsInVideo(v,s,e):**

```
Algorithm 4  FindObjectsInVideo(R, s, e)
S = NIL;   (* no objects found so far *)
if R = NIL then { return S; halt }
else {
  if [R.LB, R.UB] ⊆ [s, e] then
    S = append(S, preorder(R))
  else if [R.LB, R.UB] ∩ [s, e] ≠ ∅ then {
    S = append(S, R.OBJ);
    S = append(S, FindObjectsInVideo(R.LLINK, s, e));
    S = append(S, FindObjectsInVideo(R.RLINK, s, e));
  }
}
return(S);
end
```

### RS-Trees

- Very similar to the frame segment tree.
- The concepts of OBJECTARRAY and ACTIVITYARRAY remain the same as before.
- Instead of using a segment tree to represent the frame sequences (such as those shown in Figure ??), we take advantage of the fact that a sequence \([s, e]\) is a rectangle of length \((e - s)\) and of width 0.
- We already know how to represent a set of rectangles using an R-tree.
- Example: *(figure omitted)*

### Video Segmentation

- We have assumed a *logical delineation* of video data – video is broken up into homogeneous segments.
- Usually, a video is created by taking a set of *shots*.
- These shots are then composed together using specified *composition operators*.
- Shots are usually taken with a fixed set of cameras, each of which has a constant relative velocity.
- A *shot composition* operator, often referred to as an *edit effect*, is an operation that takes as input two shots, $S_1, S_2$, and a duration, $t$, and *merges* the two shots into a composite shot within time $t$.
- Thus, for example, suppose we wish to compose together two shots $S_1, S_2$, and suppose these two shots have durations $t_1, t_2$ respectively. If $f$ is a shot composition operator, then
  $$ f(S_1, S_2, t) $$
  creates a segment of video of length $(t_1 + t_2 + t)$. $S_1$ is first shown and then undergoes a continuous transformation over a time interval $t$, leading to the presentation of $S_2$ next.
- $f(S_1, S_2, t)$ is then a continuous sequence of video.
- In general, a video as a whole, composed of $n$ shots, may be represented as:
  $$ f_{n-1}(\ldots f_2(f_1(S_1, S_2, t_1), S_3, t_2) \ldots, S_n, t_{n-1}). $$

### Shot Composition Operators

- **Shot Concatenation**: Concatenates the two shots (even if the transition is not smooth). If `shotcat` is a shot concatenation operator, then \( t \) must be zero, i.e. whenever we invoke `shotcat`\((S_1, S_2, t)\), the third argument \( t \) must be set to 0.
- **Spatial Composition**: Examples include a *translate* operation which causes two successive shots to be overlaid one on top of the other.
For instance, suppose we want to show shot \( S_1 \) first, followed by shot \( S_2 \). This is done by first overlaying shot \( S_1 \) *on top* of shot \( S_2 \) and then moving (i.e. translating) shot \( S_1 \) away, thus exposing shot \( S_2 \).
- **Chromatic Composition**: *fades* and *dissolves*. Both these operations are chromatic scaling operations that try to continuously transform each pixel \((x, y)\) in the first shot into the corresponding pixel in the second shot.

### Video Segmentation Problem

- Given a video $V$, express the video $V$ in the form:
  \[ V = f_{n-1}(\ldots f_2(f_1(S_1, S_2, t_1), S_3, t_2) \ldots, S_n, t_{n-1}). \]
- That is, given video $V$, find $n$, shots $S_1, \ldots, S_n$, times $t_1, \ldots, t_{n-1}$, and composition operations $f_1, \ldots, f_{n-1}$ such that the above equation holds.

### Video Standards

- All video compression standards attempt to compress videos by performing *intra-frame* and *inter-frame* analysis.
- Each frame is divided up into blocks. Different frames are compared to see which data is “redundant” between them.
- Redundant data is dropped to compress.
- Compression quality is measured by:
  - the fidelity of the color map – how many colors of the original video occur when the compressed video is decompressed?
  - the pixel resolution per frame – how many pixels per frame of the video have been dropped?
  - the number of frames per second – how many frames have been dropped?
- Compression standards: MPEG-1, 2, 3, Cinepak, JPEG video, etc.

### MPEG-1

- Stores videos as a sequence of $I$, $P$ and $B$ frames.
- $I$-frames are independent images called “intra frames”. Basically a still image.
- A $P$-frame is predicted from the closest $I$- or $P$-frame preceding it, with the residual encoded using the DCT.
- $B$-frames are computed by interpolating between the two closest surrounding $P$ or $I$ frames.
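The I-, P- and B-frame dependencies just described can be illustrated with a small sketch (my own toy illustration, not part of the MPEG-1 standard): given a display-order pattern of frame types, it lists the reference frames each frame depends on.

```python
def references(pattern):
    """For each frame in a display-order type pattern (e.g. "IBBP"),
    return the indices of the reference frames it is computed from:
    an I-frame stands alone, a P-frame uses the closest preceding
    I/P frame, and a B-frame the nearest I/P frames on either side.
    The pattern must end with an I or P frame so that every B-frame
    has a following reference."""
    anchors = [i for i, t in enumerate(pattern) if t in "IP"]
    refs = {}
    for i, t in enumerate(pattern):
        if t == "I":
            refs[i] = []
        elif t == "P":
            refs[i] = [max(a for a in anchors if a < i)]
        else:  # B-frame: bidirectional interpolation
            refs[i] = [max(a for a in anchors if a < i),
                       min(a for a in anchors if a > i)]
    return refs

refs = references("IBBPBBP")  # frame 1 (a B-frame) depends on frames 0 and 3
```

Note that because B-frames depend on a *later* anchor frame, a decoder must receive that anchor before the B-frames that reference it, which is why transmission order differs from display order.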
### MPEG-1, Continued

*(figure omitted)*

### Other Compression Standards

- MPEG-2 uses higher pixel resolution and a higher data rate, making it superior to MPEG-1 in terms of the quality of the video as seen by the user. However, it requires higher bandwidth, making it feasible for some, but not all, applications.
- MPEG-3 supports even higher sampling rates and frames per second than MPEG-2.
Integrating a software architecture-centric method into object-oriented analysis and design Raghvinder Sangwan a,*, Colin Neill a, Matthew Bass b, Zakaria El Houda b a Pennsylvania State University, Great Valley School of Graduate Professional Studies, 30 East Swedesford Road, Malvern, PA 19355, USA b Siemens Corporate Research, 750 College Road East, Princeton, NJ 08540, USA Received 27 November 2006; received in revised form 23 July 2007; accepted 25 July 2007 Available online 10 August 2007 Abstract The choice of methodology for the development of the architecture for software systems has a direct effect on the suitability of that architecture. If the development process is driven by the user’s functional requirements, we would expect the architecture to appropriately reflect those requirements. We would also expect other aspects not captured in the functional specification to be absent from the architecture. The same phenomenon is true in development approaches that stress the importance of systemic quality attributes or other non-functional requirements; those requirements are prominent in the resulting architecture, while other requirement types not stressed by the approach are absent. In other words, the final architecture reflects the focus of the development approach. An ideal approach, therefore, is one that incorporates all goals, expectations, and requirements: both business and technical. To accomplish this we have incorporated, into a single architectural development process, generalized Object-Oriented Analysis and Design (OOAD) methodologies with the software architecture-centric method, the Quality Attribute Workshop (QAW) and Attribute Driven Design (ADD). OOAD, while relatively intuitive, focuses heavily on functional requirements and has the benefit of semantic closeness to the problem domain making it an intuitive process with comprehensible results. 
Architecture-centric approaches, on the other hand, provide explicit and methodical guidance to an architect in creating systems with desirable qualities and goals. They provide minimal guidance in determining fine-grained architecture, however. The integrated approach described in this paper maximizes the benefits of the respective processes while eliminating their flaws and was applied in an eight-university, global development research project with great success. A case study from that experiment is included here to demonstrate the method. © 2007 Elsevier Inc. All rights reserved. Keywords: Software architecture-centric methods; Object-Oriented Analysis and Design (OOAD); Quality Attribute Workshop (QAW); Attribute Driven Design (ADD) 1. Introduction Software architecture is defined as “... the structure or structures of the system, which comprises software elements, the externally visible properties of those elements, and the relationships among them (Bass et al., 2003).” Given this definition, every software system has an architecture; and, if one were to implement a system two different ways, each would have its own architecture. Which of the two, one might ask, is the preferred architecture? After all, both versions of the system support the same functionality. What makes one superior to the other? The answer depends on the context. If system functionality is the only important consideration and non-functional requirements such as performance, maintainability and extensibility are not, for example, the clear choice may be the architecture that is cheapest to build. If, however, the consideration is an architecture that supports the creation of a product line, those non-functional requirements become critical and it may be the case that neither of the two architectures is suitable. Therefore, the suitability of an architecture is measured in terms of its fitness to purpose. * Corresponding author. Tel.: +1 610 7255354; fax: +1 610 6483377.
E-mail address: rsangwan@psu.edu (R. Sangwan). Heretofore fitness to purpose has been a technical consideration where the system’s purpose is defined by the requirements specification. Considering that use cases (Cockburn, 2000) are the most popular form of requirements specification in use today (Neill and Laplante, 2003) and that it is only recently that effective approaches to documenting non-functional requirements in use cases have been proposed (Alexander, 2003; Zou and Pavlovski, 2006) it is not surprising that when the system’s fitness to purpose is related to systemic, non-functional requirements, such as memory and temporal performance, architectures designed to maximize functional cohesion can fail. This situation is further exacerbated when the individual components of a system are developed independently of one another, as has become the trend in globally distributed system development. Whereas in local development important systemic properties not explicitly identified in the architecture might still be considered by the development team, since they all have a reasonable perspective over the entire system, in distributed development scenarios this overarching perspective is not shared and thus it is easy for systemic constraints to be overlooked resulting in blown budgets for memory or temporal performance, for example (Mullick et al., 2006). To mitigate this risk we can adopt alternative approaches to architectural design where systemic properties, or quality attributes as they are called, are a primary driver. Of course, in commercial software development the real purpose of a system is described by overarching business goals, a concept that Hohmann (2003) described as the marketecture of a system – the business perspective of a system’s architecture that directly influences the technical architecture. Given this consideration it is critical that these business goals actually drive the architectural design. 
One approach that has shown success in this area is the combination of the Quality Attribute Workshops (Bachmann et al., 2002; Barbacci et al., 2000) and Attribute Driven Design (Bass et al., 2003). As will be demonstrated later, this approach does indeed guide the architect in creating an architecture that reflects the business goals of the system under development but with the drawback that the fine-grained design is undefined. Alternatively, the standard approaches to object-oriented analysis and design, most comprehensively expressed by Cheesman and Daniels (2001), result in fully detailed fine-grained architectures with a high degree of semantic closeness to the problem domain that make them intuitive and easily understood. There is nothing in these approaches, however, that explicitly addresses the non-functional and systemic properties of the system and so we find that these aspects are not prominent in the resulting architecture. We have, then, two alternative paradigms for architectural design: architecture centric and OOAD. Each paradigm has significant advantages, but corresponding weaknesses. We propose in this paper an approach that combines two examples of these alternatives to capture the best of both “breeds” in a single architectural development process. In brief, this process starts by capturing the business and technical goals of the system under design with QAW, then iteratively elaborates the coarse-grained architecture from a monolithic starting point through attribute driven design before applying standard OOAD techniques to determine the fine-grained architectural detail for the coarse grains generated by ADD. This integrated approach was applied in an ongoing worldwide software development project involving eight universities across four continents, and we present a case study example from this project that demonstrates the efficacy of the hybrid process. 
In the following section the Global Studio Project, a software development project used as a test bed for this investigation, will be introduced. Section 3 uses the OOAD approach described in Cheesman and Daniels (2001) to develop the architecture for the system under consideration in this project. Section 4 uses architecture-centric methods developed at the Software Engineering Institute (SEI), namely the Quality Attribute Workshop (QAW) and Attribute Driven Design (ADD), for the same purpose. Section 5 proposes an integrated approach that combines activities from OOAD with those from architecture-centric methods to create architectures that adequately support the business goals. Section 6 discusses related work, and Section 7 follows with conclusions. 2. Case study: global studio project Siemens Corporate Research (SCR), in collaboration with eight universities across four continents, is currently conducting a multi-year experiment, called the Global Studio Project (GSP), to gain a better understanding of the issues surrounding and the impact of various practices in managing globally distributed software development projects. At the time of this writing, the universities (shown in Table 1) contributing student teams to the development effort had completed two years of their involvement in this study. As a part of GSP, student teams from these universities are to collaboratively develop a unified management station (called MSLite) for the building automation domain that will automatically monitor and/or control the internal functions of buildings, such as heating, ventilation, air conditioning, lighting, access and safety. The intended users of MSLite are facility managers who need to operate the many (hardware) systems required to support building functions. Since there are a large number of these systems, a Field System Simulator (FSS) is used during software product development to simulate these systems.
An FSS configuration file is used to create the initial configurations of the simulated systems, including their structure and the initial values of their various properties. For example, a system that monitors air conditioning on some floor of a building may have several temperature sensors at various locations, and their threshold values for how to regulate the temperature may be set to the desired levels within the FSS file. Fig. 1 illustrates this broad functional context of the MSLite system. Some of the high level functional requirements for the MSLite system are: - Manage the field systems represented in FSS - Issue commands to the field systems to change values of their properties - Define rules based on property values of field systems that trigger reactions and issue commands to field systems - Define alarm conditions similar to rules that when met trigger alarms notifying appropriate users If SCR were to commercialize the MSLite system, it would need to do so by entering new and emerging geographic markets and opening sales channels in the form of Value Added Resellers (VARs). VARs sell the software under their own brand to support hardware devices of many different manufacturers. It is clear that these business goals would have a significant effect on the architecture of the MSLite system without necessarily affecting its functionality. For example, hardware devices from many different manufacturers would need to be supported, and considerations would have to be made to take the language, culture and regulations of different markets into account. Trade-offs would need to be made and risks assessed to determine the extent to which the product should support these goals. Depending on the company’s comfort level with the trade-offs and risks, these goals may need to be refined, e.g. scaling back on the intended markets.
All of these business decisions require input from technical staff to determine the impact of such requirements and to inform the technical staff of the importance of these systemic requirements. Too often, however, there is a disconnect between what an organization wants and what its technical team delivers. We have come across a case where a business unit wanted to create a high-performing infotainment system for a luxury line of cars in a compressed time to market. The technical team was forced to distribute the development of parts of this system across geographically distributed teams to achieve the compressed schedule via parallel development efforts. When the components developed by the teams were integrated together, they blew the memory and performance budgets. While individual components were carefully crafted, not enough attention had been given to the overall system goal of achieving high performance within the given resource constraints. The end result was that the business unit was not able to produce the desired product. The disconnect between what was desired and what was delivered cost the company hundreds of millions of dollars spent developing the system and billions of dollars in potential revenue. Clearly, there is a need to bridge this gap. To avoid a similar situation, the teams developing MSLite would, therefore, need to pay special attention to the business goals of entering new and emerging geographic markets with the accompanying demands on modifiability and interoperation when opening new sales channels in the form of VARs. Given these business goals for the system we will now consider the two approaches to system design, starting with the typical and familiar approach to object-oriented development where the software design is based on the model of the problem domain. 3.
3. The OOAD approach

In order to create the architecture of a system, OOAD first captures the requirements by identifying the user–system interaction at the boundary of the system under consideration. These interactions are described in the form of use cases. One of the ways of identifying use cases is to begin by analyzing the business processes a system will be designed to support (Cheesman and Daniels, 2001). A business process is a sequence of steps or activities the business workers undertake to provide a service or perform a task. While the system under design could potentially automate the entire business process, typically only those activities within the business process that are candidates for automation have to be identified. Activities within a business process result from business events initiated by people or external systems, referred to in the Unified Modeling Language (UML) as actors. The activities triggered by a single event form a single use case, and the actor responsible for the event becomes the primary initiator of the use case. Use cases are goal oriented; in other words, at the end of a use case the actor initiating the given use case walks away from the system with something of value (Cockburn, 2000). A business process can be described using UML activity diagrams. Fig. 2 shows a subset of the business processes the MSLite system must support. These diagrams address a portion of the initial building automation domain; this was done to keep the illustrative analysis within a reasonable size for the objectives of this paper. The activity diagram in Fig. 2a depicts the building configuration workflow. This workflow is triggered when a facility manager receives a new building operation policy or an update to an existing policy. Depending on the policy details, a new rule may be created, an existing rule may be amended, or no change may be required if the policy is already implemented by the existing rules.
The rules can have different reactions such as issuing commands, generating alarms or sending notifications. In the activity diagram in Fig. 2b, the handling of alarms by a facility manager is shown. Again, depending on the alarm details, the facility manager may have to acknowledge an alarm, follow a Standard Operating Procedure (SOP) in response to the alarm, or simply dismiss the alarm. UML use case diagrams can be used for depicting the use cases for a system. Fig. 3 shows a subset of the use cases and actors identified from the analysis of some of the business processes to be supported by MSLite, including the ones described by the activity diagrams in Fig. 2. These use cases are grouped into four use case packages. Use Case Package 100 (UCP100) is concerned with configuring the building operations and contains use cases for defining Automation rules, Alarms and Standard Operating Procedures. UCP200 covers some aspects of monitoring the building health and includes use cases for Issuing commands to field devices and Handling Alarms and their lifecycle. UCP300 addresses the Personalization of the system by its operators (facility managers). Finally, UCP400 manages the interaction with field systems. It contains use cases for handling events originating from field systems, which include, for example, changes of some field system property value and failure reports.

Fig. 2. Activity diagrams showing business processes; (a) shows activities related to configuring building operations and (b) shows activities related to monitoring the health of the building.

The business process descriptions in Fig. 2 introduce some significant concepts from the building automation domain such as alarms, rules, commands, SOPs, etc. These concepts are significant because they represent the business information/entities that get created, destroyed, associated and used in various ways within a use case in order to achieve something of value.
Therefore, an additional important artifact in OOAD is a business concept model that captures these significant business terms and the relationships among them (Larman, 2004). Fig. 4 shows this model for the MSLite system using a UML class diagram. The business concept model of the MSLite system identifies a collection of domain entities and qualifies the way they interact or are related to each other using associations. It is, for example, possible to see that Field Systems contain field devices of different types and are themselves linked by a network. A field device (such as a humidity detector) is located at a physical location and contains properties of various types which “read” the environmental conditions at that location. The use case model and the business concepts model serve as inputs for specifying the components for a system, which become a basis for its architecture (Jacobson et al., 1999). In order to specify the components, we must first identify the interfaces they support. The use cases, being at the boundary of a system, help identify the system interfaces. The business concepts, representing entities utilized by the use cases, help identify business interfaces used for managing these entities (Larman, 2004). Fig. 5 shows the system interfaces and business interfaces for the MSLite system. In Fig. 5a, system interfaces and their corresponding use cases are shown. Initially, methods for these interfaces are extracted from the use case steps. Fig. 5b shows the business interfaces. The process for obtaining these interfaces... starts by refining the business concepts into core business types. We identify core business types as business concepts in the Business Concept Model (see Fig. 4) that have no mandatory associations. A business interface is then created for each core type. The resulting model shown in Fig. 5b indicates the core types using the “core” stereotype.
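The core-type heuristic just described can be sketched mechanically: concepts with no mandatory associations become core types, and one business interface is created per core type. The tiny concept model below is a simplified, hypothetical fragment (not the full model of Fig. 4), and the `IXxxMgt` naming convention is our assumption.

```python
# Sketch of the core-type heuristic: business concepts with no mandatory
# associations become core types, and one business interface is created
# per core type. Concept names and the interface naming convention are
# illustrative assumptions.
concepts = {"FieldSystem", "FieldDevice", "Property", "Rule", "Alarm"}

# (source, target) pairs where `source` cannot exist without `target`.
mandatory = {
    ("FieldDevice", "FieldSystem"),   # a device belongs to a field system
    ("Property", "FieldDevice"),      # a property belongs to a device
    ("Alarm", "Rule"),                # an alarm is raised by a rule
}

def core_types(concepts, mandatory):
    # A concept is core if it is never the source of a mandatory association.
    dependent = {src for src, _ in mandatory}
    return concepts - dependent

def business_interfaces(concepts, mandatory):
    # One hypothetical "IXxxMgt" management interface per core type.
    return sorted(f"I{name}Mgt" for name in core_types(concepts, mandatory))
```

Under these assumptions, `FieldSystem` and `Rule` come out as core types, each yielding one business interface.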
Once the interfaces have been identified, each interface (system or business) could be allocated to a single component, or a single component could support multiple interfaces. This is where OOAD does not provide firm guidelines. The allocation of interfaces is primarily driven by the principles of abstraction, encapsulation, and separation of concerns, which result in loosely coupled and highly cohesive components. While cohesion comes in many forms, the dominant form for most developers is that of functional closeness of the class members (Schach, 2006). At this point one is, therefore, mainly performing a functional decomposition of the system, with very little focus on its business goals or other quality attributes. Fig. 6 shows the system and business components for MSLite. The system components shown in Fig. 6a were obtained by regrouping interfaces dealing with the same functional aspects of the system. The diagram shows business interface dependencies to the left of each component. In Fig. 6b, a business component is created per business interface. Once the initial component specifications, their supported interfaces and their interface dependencies have been identified, a component specification architecture for a system can be created, such as the one shown for the MSLite system in Fig. 7. This results from combining the system components and business components from Fig. 6. It should be noted that we have limited our analysis and design to a small fraction of the MSLite system. In reality there are many more business and system components than those shown in Fig. 7.

Fig. 5. (a) System interfaces identified from use case model, and (b) business interfaces identified from business concepts model.

Fig. 6. (a) System components, and (b) business components for MSLite system.

---
1 The exception is made for associations with categorizing types. Refer to Cheesman and Daniels (2001), chapter 5, for a more detailed description of the process applied.
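The allocation step described above, where a single component supports several functionally cohesive interfaces, can be sketched as follows. The interface and component names here are hypothetical; they illustrate the grouping step, not the actual MSLite design.

```python
from abc import ABC, abstractmethod

# Two hypothetical alarm-related interfaces, functionally close enough
# to be allocated to one component.
class IAlarmDefinition(ABC):
    @abstractmethod
    def define_alarm(self, name: str, condition: str) -> int: ...

class IAlarmHandling(ABC):
    @abstractmethod
    def acknowledge(self, alarm_id: int) -> None: ...

class AlarmSystem(IAlarmDefinition, IAlarmHandling):
    """One component supporting both alarm-related interfaces."""

    def __init__(self):
        self.alarms = {}
        self.acknowledged = set()

    def define_alarm(self, name, condition):
        alarm_id = len(self.alarms)
        self.alarms[alarm_id] = (name, condition)
        return alarm_id

    def acknowledge(self, alarm_id):
        self.acknowledged.add(alarm_id)
```

Clients depend only on the abstract interfaces, so the grouping decision stays local to the component, which is precisely the loose coupling the allocation principles aim for.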
The architecture obtained in Fig. 7 reflects the use case driven nature of OOAD: the system components were created by aggregating functionally cohesive use cases. It also shows OOAD’s closeness to the model of the problem domain: the business components were motivated by identifying core entities in the business concepts model. Although this approach does lead to architectures with loosely coupled and highly cohesive components that are easy to understand due to their semantic closeness to the problem domain, it predominantly uses functional decomposition with very little focus on the business goals or quality attributes. In the given example, for instance, the quality attribute requirements associated with supporting new and emerging geographic markets and different VARs are not explicitly accommodated in this architecture. These requirements call for addressing modifiability concerns such as adding a new hardware device or supporting a new language; without further refinement and the introduction of adapters and factories, such changes to the architecture later in the development lifecycle would be costly. Moreover, the development approach itself does not highlight such non-functional requirements, so possible accommodations are not usually even considered.

4. The architecture-centric approach

In contrast to the OOAD approach of the previous section, architecture-centric approaches focus on systemic properties that the software architecture must embody. Factors that influence the architecture, therefore, tend to be quality attributes such as performance, modifiability, security and reliability (Bass et al., 2003). Consequently, these quality attribute requirements become a starting point for the architecture-centric methods. Of course, these requirements must provide sufficient detail in order to be truly useful. For instance, it may not be sufficient to say that a system must be modifiable.
Any system is modifiable with respect to something, and a system can be modified with respect to any aspect given enough time and money. The question is: modifiable with respect to what, when, and with how much effort? Since they are the drivers for the architectural decisions, the first task is to determine the important systemic properties. This is done with Quality Attribute Workshops (QAWs) (Bachmann et al., 2002; Barbacci et al., 2000) – an architecture-centric method for eliciting quality attribute requirements from the stakeholders of a given system. The goal of this method is to establish a prioritized set of architecturally significant requirements in the form of quality attribute scenarios that are mapped to the business goals. Clearly, it is important that these goals are known before the workshop can be conducted, even if they are initially very general and will need subsequent refinement.

Fig. 7. Component specification architecture for the MSLite system.

The MSLite system has the following business goals:

BG1: In order to succeed in the Value Added Resellers market, the system must be able to support hardware devices from different manufacturers. This includes existing and, to some extent, future devices.

BG2: It must be possible to modify the system to support different languages, cultures and regulations.

As the first step, these goals can be further refined as follows:

BG2.1: The system must allow changing the language of all user interactions to a language of choice. This includes languages with non-Latin characters and scripts written from right to left.

BG2.2: The field devices supported by the system can use different units. These units can be different from the units used by the user when specifying automation rule thresholds and commands. The system must be able to make all required conversions for rule evaluation and commands without errors and without user intervention.
BG2.3: Certain regulations and certifications require all life-critical systems, such as fire alarms and intrusion detection systems, to operate within specific latency constraints. The system must be able to meet these latency requirements with a sufficient margin.

The next step is to link these business goals to the corresponding quality attributes, as shown in Table 2. The table also shows tactics (Bass et al., 2003) that can be used for addressing the quality attribute requirements when elaborating the architecture for the MSLite system. Finally, for each quality attribute, the scenarios characterizing the corresponding quality attribute requirements are summarized in Table 3. This table also shows a priority evaluation for each scenario. The first value represents the importance of the scenario to the stakeholders. The second is an evaluation by the architecture team of the difficulty of implementing that scenario. Scenarios that are a high priority (H) to the stakeholders and have a high (H) degree of difficulty of implementation will be addressed before those with low priority (L) and a low (L) degree of difficulty; M represents medium priority and a medium degree of difficulty. After the architecturally significant requirements have been elicited, the architecture that meets these requirements is elaborated using the attribute driven design (ADD) approach (Bass et al., 2003).

4.1. Architecture elaboration

The architecture elaboration approach we use as part of ADD is an iterative process.
ADD starts by treating a system as a single monolithic component responsible for all of the system functionality. It then recursively decomposes the system by applying architectural tactics successively to satisfy each quality attribute requirement.

### Table 2
Business goals, the corresponding quality attributes, and candidate tactics.

<table>
<thead>
<tr>
<th>Business goal</th>
<th>Quality attribute</th>
<th>Tactics and tactic categories</th>
</tr>
</thead>
<tbody>
<tr>
<td>BG 1, BG 2.1, BG 2.2</td>
<td>Modifiability</td>
<td>Localize change: anticipate expected changes; generalize module. Prevention of ripple effect: use an intermediary; maintain existing interfaces; hide information. Defer binding time: runtime registration</td>
</tr>
<tr>
<td>BG 2.3</td>
<td>Performance</td>
<td>Resource demand: increase computational efficiency; reduce computational overhead. Resource management: introduce concurrency; maintain multiple copies</td>
</tr>
</tbody>
</table>

### Table 3
Quality attribute scenarios and their priorities (stakeholder importance, implementation difficulty).

<table>
<thead>
<tr>
<th>Quality attribute characterization</th>
<th>Attribute scenario</th>
<th>Priority</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="3"><strong>Modifiability/Extensibility</strong></td>
</tr>
<tr>
<td>Support for new field device system</td>
<td>E1. Support for a new Field Device System offering functionality comparable to the field system simulator must be added. The configuration information and details of the interface (calling conventions, method names, etc.) are in a different format. A team of two developers reasonably experienced with C# extends MSLite to support the new system in 320 person hours (40 h per week and person, 4 weeks)</td>
<td>(H, H)</td>
</tr>
<tr>
<td>International language support</td>
<td>E2. A new language needs to be supported by the system. No code modification is required. A developer reasonably familiar with the system is able to package a version of the system with the new language in 80 person hours (40 h per week and person, 2 weeks), excluding string translation time</td>
<td>(H, M)</td>
</tr>
<tr>
<td>Non-standard units support</td>
<td>E3. A new field device system using non-SI units is connected to the system. A system administrator configures the system to handle the new units in less than 3 h</td>
<td>(H, M)</td>
</tr>
<tr>
<td colspan="3"><strong>Performance</strong></td>
</tr>
<tr>
<td>Latency of event propagation</td>
<td>P1. A field system detects a change of a property value and notifies MSLite. The system operates under normal conditions. The value is updated on all user screens that currently display the property value within 3 s. The time durations specified in this scenario are performance goals and not hard deadlines</td>
<td>(H, H)</td>
</tr>
<tr>
<td>Latency of alarm propagation</td>
<td>P2. An event which should trigger an alarm is generated in a field device. The system operates under normal conditions. The alarm is displayed on the user interfaces of all users that must receive the alarm within 3 s after the generation of the event</td>
<td>(H, H)</td>
</tr>
</tbody>
</table>

Notes: Normal conditions are specified as follows:
- Number of concurrent sessions connected to MSLite < 15.
- Change of value (COV) rate < 600 per minute (30 field object properties at 20 COVs/min).
- Active automation rules = 50.
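The prioritization scheme of Table 3 can be sketched as a simple ordering over (importance, difficulty) pairs. The article states only that (H, H) scenarios are addressed before (L, L) ones; treating importance as the primary key and difficulty as the secondary key is our assumption, and the `X1` scenario below is invented to illustrate the low end of the ordering.

```python
# Sketch of ordering quality attribute scenarios by their (importance,
# difficulty) ratings, as in Table 3. The exact tie-breaking rule is an
# assumption; the article only requires (H, H) before (L, L).
RANK = {"H": 0, "M": 1, "L": 2}

scenarios = [
    ("E2. International language support", "H", "M"),
    ("P1. Latency of event propagation", "H", "H"),
    ("E3. Non-standard units support", "H", "M"),
    ("X1. Hypothetical low-priority scenario", "L", "L"),
]

# Importance first, then difficulty; Python's sort is stable, so equally
# rated scenarios keep their elicitation order.
ordered = sorted(scenarios, key=lambda s: (RANK[s[1]], RANK[s[2]]))
```

With this key, the (H, H) performance scenario is addressed first and the hypothetical (L, L) scenario last, matching the addressing order described above.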
Very frequently, applying multiple tactics implies taking conflicting design decisions. This is why the end result of the architecture elaboration process is a compromise directly reflecting the prioritization of the quality attributes. Initially, the system consists of a single component responsible for all the functionality to be implemented. This component is shown in Fig. 8, along with the common legend which will be used for all the component and connector diagrams produced during the elaboration process. From the business goals linked to modifiability, we can observe that a primary variation dimension is the type and number of field device systems the MSLite system will have to interact with. In anticipation of these changes, the decomposition of the system attempts to minimize the number of components having a syntactic dependency on the field systems. This is achieved by introducing an adapter for each field system. The anticipate expected changes tactic by itself has minimal benefit in reducing ripple effects when adding support for a new field system, because it does not take into account indirect dependencies. We use two additional sub-tactics to minimize the propagation of change. First, we specify a standard interface to be exposed by all adapters (maintain existing interfaces). Additionally, we use the adapter as an intermediary responsible for semantic translation (when possible). This translation covers, for example, the unit conversions mentioned in business goal 2.2. Fig. 9 depicts the system after the introduction of the adapters. Despite applying the tactics mentioned above, the MSLite server is still sensitive to a change in the number of field devices it is connected to, and must include logic to route commands and data to and from the correct adapter. Hiding information, another modifiability sub-tactic, is introduced to further limit the ripple effect when adding/removing field systems.
This is done by introducing the concept of a Virtual Field System Simulator (VFSS). The VFSS hides information about the number and type of the field systems actually connected. For all other components of the MSLite system, there is effectively only one field system to interact with at all times. The result of applying this tactic can be seen in Fig. 10. At this point, we have applied most of the modifiability tactics identified in Table 2 which address field system variability. By doing this, we have actively included the ability to fulfill business goals 1 and 2.2 in the architecture elaboration. Other types of variability will be included further along in the process. In order to support business goal 2.3, performance tactics will be applied next. The first tactic belongs to the resource management category and relies on introducing concurrency to reduce delays attributable to “blocked time”. The evaluation of automation rules is a prime candidate for concurrency, since it is a computationally intensive process with a fairly low and predictable amount of communication. We therefore move the responsibility for rule evaluation and execution, and for alarm generation, to a separate Logic and Reaction (L&R) engine component and an Alarm engine component, respectively. These components, running outside the MSLite Server context, can easily be moved to dedicated execution nodes if necessary. Concurrency was also used inside these engines to perform simultaneous rule evaluations with the help of thread pools. This second application is, however, not visible at the level of detail at which Fig. 11 shows the new structure of the system. It should be mentioned that the L&R and Alarm components may give the reader the impression that they are identical to the RulesSystem and AlarmSystem components in the architecture produced via OOAD. That is, however, not the case; the appearance of similarity comes from the way these components are named.
We limited ourselves to the vocabulary of the domain when naming components rather than concocting artificial names. The only business goal not currently incorporated in the architecture is business goal 2.1, which focuses on modifiability of the user interface. The modifiability tactic chosen to support this business goal is the anticipation of expected changes and their localization in a separate user interface presentation module. The separation of the user interface from the rest of the application is also classified as a usability tactic in Bass et al. (2003). It can be implemented using a variety of architectural patterns; in our case we chose a variant of the Model View Controller (MVC) pattern. The new state of the system’s component and connector view can be seen in Fig. 12. By examining the system structure in Fig. 12, it can be seen that every time the L&R, the Alarm or the Presentation component needs a value from a field device, it must make a call traversing multiple components all the way to the field systems. Since crossing component boundaries typically introduces computational overhead, and because the querying latency of field systems is a given constraint over which we have no control, we introduce a performance tactic relying on maintaining multiple copies of data to improve device querying performance. This is achieved by using the value cache component seen in Fig. 13. This cache provides field device property values to the other system components, saving part of the performance cost incurred when querying the actual field devices. The performance gains arise because we reduce the number of process and machine boundaries traversed for each query.

Fig. 12. Applying the separation of user interface tactic.

The architecture obtained in Fig. 13 reflects the focus of the architecture-centric approach on systemic properties that the software architecture must embody.
These systemic properties were used as a starting point for creating the architecture. Although this approach does lead to architectures that are more robust when a system’s fitness for purpose is related to systemic or non-functional requirements, it does not address how subsequent design, such as the process of identifying component interfaces and their respective operations, should occur. It should also be noted that in order to generate quality attribute scenarios for the architecture-centric approach, some understanding of the overall functional requirements of the system is necessary. So activities similar to those of OOAD that establish a use case model and a business concepts model (illustrated in Figs. 2–4) must also take place.

5. An integrated approach

In the previous sections we have explored the architectural analysis and design of the MSLite system and showed that using the ADD approach we arrive at an architecture that supports the business and mission goals of the application, but leaves the fine-grained design details unspecified. Conversely, the traditional OOAD approach arrives at a final design that includes these fine-grained, class-level details, but these are distributed across an architecture that reflects an emphasis on functional cohesion rather than fundamental business goals. Ideally, then, we would prefer to merge the two approaches to arrive at a final architecture that simultaneously meets business goals and provides sufficient detail for implementation. Fig. 14 presents a process workflow for such an integrated approach that was used for developing the MSLite system. It should be noted that this is a partial view, with emphasis on the synergy points between the two methods. We voluntarily omit from the figure subsequent activities concerned with constraint specification, provisioning, etc., as they lie beyond the scope of the discussion.
Since quality attributes are central to creating an architecture, the architecture derived from architecture-centric methods should form the basis of the design and implementation of the system under consideration. In the integrated approach, the analysis and domain modeling activities in OOAD, such as the use case and business concepts modeling, that provide a broad understanding of the functional requirements were performed concurrently and iteratively with activities in the architecture-centric approach that provide an understanding of the quality attribute requirements of the system. The quality attribute requirements were then used for further elaboration of the architecture. As architectures produced in this manner are high-level models, detailed design and implementation of the components and their connectors were carried out using OOAD. Such a synergy between the OOAD and architecture-centric approaches provides a linkage from high-level models to the source code that is important for preserving the integrity of the architectural design as the system evolves (Garlan, 2000). For the MSLite system, we carried out the OOAD activities illustrated in Figs. 2–4 concurrently and iteratively with the architecture-centric activities illustrated in Tables 2 and 3. The OOAD activities took a broad and shallow approach in creating a fairly comprehensive list of the functionality to be supported by the system, expressed in terms of the key concepts captured in the business concepts model. These became input for the quality attribute scenarios elaborated during the QAW. It is important to note that this broad and shallow approach implies enumerating most of the functional requirements but elaborating only the most important (for example, architecturally significant) ones. This is in essence an iterative and incremental approach as opposed to a waterfall approach. With this understanding, the architecture was elaborated as shown in Figs. 8–13 using ADD.
Fig. 14. Process workflow for an integrated approach.

We started with the system as a black box and continued to decompose it using a prioritized list of quality attribute requirements obtained from the QAW. As a particular decomposition of a system may conflict with a prior one, some trade-offs may be necessary to manage the competing requirements. In the case of the MSLite system, after focusing on modifiability, we introduced a number of performance tactics which resulted in the creation of multiple components, as shown in Fig. 13. Based on the structure of these components and the type of their connectors, we predicted that some changes to the virtual FSS have the potential to propagate to five other components (L&R, Alarm, Presentation, Cache and MSLite Server). This would be particularly damaging to the modifiability value of the system. As explained in Bass et al. (2003), the end architecture must strike the right balance in the classic performance/modifiability trade-off. The Publish–Subscribe bus is a component we introduced to implement three modifiability tactics. First, it alleviated the syntactic dependencies of inter-component calls by acting as a standard interface intermediary. Second, using the module generalization tactic, it was made invariant to the type of events it transports. This generalization allowed new types of events to be transported with no modification to the Publish–Subscribe component. Finally, it relied on runtime registration to allow system extensibility by adding publishers and subscribers. The state of the system after the introduction of the Publish–Subscribe component is shown in Fig. 15. The tactics we applied next were not explicitly stated in the subset of business goals mentioned earlier but are essential non-functional requirements. We briefly mention them here for completeness. Security requirements of the system were met by introducing user authentication in the access control module.
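The three tactics embodied by the Publish–Subscribe bus described above — standard interface intermediary, generalization over event types, and runtime registration — can be sketched in a few lines. This is an illustration of the tactics only, not the actual MSLite component; topic names and payloads are invented.

```python
from collections import defaultdict

# Minimal sketch of a publish-subscribe intermediary. It is generalized
# over event types (new topics and payloads need no modification) and
# relies on runtime registration of subscribers. Illustrative only.
class PubSubBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Runtime registration: components can join at any time.
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # The bus is invariant to the event payload's type.
        for callback in self._subscribers[topic]:
            callback(event)

bus = PubSubBus()
received = []
bus.subscribe("property.changed", received.append)
bus.publish("property.changed", {"device": "temp-301", "value": 22.5})
```

Because publishers and subscribers only ever depend on the bus interface, adding a new event producer or consumer touches neither the bus nor the existing components, which is exactly the ripple-effect reduction the tactics target.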
“Buildability” was improved by delegating data persistence to an external commercially available database system. The final iteration of tactic application produced the system structure shown in Fig. 16. This decomposition became the basis for detailed design and implementation of the components and connectors shown. It should be noted that the elaboration approach using ADD can also be applied recursively to the components in Fig. 16 to create their respective internal architectures. We show this for the Logic and Reaction Engine in Fig. 17. This runtime component and connector view of the Logic and Reaction Engine shows the Rule Cache, Coordinator, Evaluator, Property Mapper, Subscription Manager, Command Dispatcher and Event Queue components. When events are received, the Coordinator uses the Property Mapper to identify the rules to be evaluated and notifies an Evaluator. The Evaluator retrieves the rule details from the Rule Cache and communicates the resulting commands to the Command Dispatcher. By using concurrency (thread pools), the component is able to achieve different types of performance gains. Some of these benefits, however, are mostly visible when multiple computational nodes are available. The corresponding static structure is depicted using a design class diagram in Fig. 18. The class structure in Fig. 18 is shown in an intermediary state where the main methods and attributes were identified and a reduced set of generalizations was applied. At this level, it is possible to introduce more design tactics and patterns in iterative refinements. This is also the level at which we were able to use results from the OOAD analysis of the domain. To illustrate this connection, Fig. 19 shows the elements from the static structure of Fig. 18 which have associations with business domain types. These business types were derived from the business concepts model as illustrated in Fig. 4 and have a grayed background for differentiation. The reference OOAD methodology we used so far offered a systematic process for identifying system and business interfaces and discovering their respective operations. At the specification level, a significant portion of these interfaces is independent of the architectural decisions, since it derives mainly from the domain and requirements analysis. In our integrated approach, we relied on these interfaces for specifying the responsibilities of components and connectors.

Fig. 15. Revisiting modifiability.

Fig. 18. Design class diagram for the logic and reaction engine.

Fig. 19. Partial design class diagram for the logic and reaction engine, with business domain type associations.

The final output of the integrated approach was a component architecture with its related design decisions traceable to the business goals, a substantial specification for business and system interfaces, and a mapping of these interfaces to the components and connectors of the architecture. Based upon the experiences of the Global Studio Project in implementing this integrated approach, the following lessons learned were reported (Mullick et al., 2006):

1. An architecture developed while requirements are changing is unstable, and for large-scale distributed development this instability results in frequent re-planning. For example, new dependencies arose such that one team required the definition of a portion of the object model under development by another team, sometimes at a later date. This disrupted the work plan and required significant architectural rework.

2. When work packages are determined from an architecture derived using OOAD, the project task dependencies that arise from systemic properties such as memory footprint and performance budgets are not adequately investigated. This leads to duplicated work effort, conflicting solutions from different teams, and integration problems requiring rework.

To overcome these problems, more upfront effort was required.
To allow a greater understanding of temporal dependencies between tasks, and to generally improve understanding of the various dependencies between components (beyond functional dependencies), more centralized architectural development was instituted that addressed more concerns than merely functional requirements. Indeed, the very formal functional specifications from the first year were replaced with more textual requirements that augmented a more detailed architecture reflecting the new focus on systemic quality attributes.

6. Related work

The role of functional and non-functional requirements in achieving architectures fit for their intended purpose has been widely recognized. There is, however, a general lack of a design methodology that supports functional and non-functional requirements in an integrated manner (Cortellessa et al., 2005; Paech et al., 2002; Peraire et al., 1999; Robbins et al., 1998). A number of software architecture design methods, such as IBM’s Rational Unified Process (Kruchten, 2004), Philips’ BAPO/CAFCR (America et al., 2003), SEI’s Attribute Driven Design (Bass et al., 2003) and Siemens’ Four Views (Hofmeister et al., 2000), provide guidance for (macro) architecture analysis and design taking into account architecturally significant quality attribute requirements, but offer minimal guidance for fine-grained (micro) architecture. We provide a summary of these methods in Table 4 (a detailed comparative analysis appears in Hofmeister et al. (2005)). Of these, the Rational Unified Process (RUP) is the one that has been most widely used in conjunction with OOAD (Larman, 2004). Unlike our approach, however, it relies on use case analysis for identifying the architecturally significant requirements used in creating a baseline architecture for the system under design.
While quality related information can be described along with use cases (Alexander, 2003; Zou and Pavlovski, 2006), it may be difficult to capture certain system-level quality attributes that are precisely the properties needed to design the architecture of the system (Garlan, 2000; Kazman et al., 2004). For example, consider the following: “An improved COTS3 discrete event generator product is available for the system, and the system permits engineers to remove the old discrete event generator and incorporate the new one in less than two person-weeks.” While not impossible, expressing this requirement as a use case and elaborating it into system functions and their corresponding special requirements may be awkward.

RUP divides the development lifecycle of a software system into four phases – inception, elaboration, construction and transition. The baseline architecture is created during the elaboration phase, at the beginning of which only a small fraction of use cases have been analyzed. Therefore, identifying architecturally significant use cases when most use cases are not fully understood can be challenging. An additional criterion RUP uses for selecting use cases that are architecturally significant is whether or not a use case exercises most of the system under consideration.

Table 4. Summary of the architecture analysis and design guidance offered by each method.

<table> <thead> <tr> <th>Method</th> <th>Architecture analysis</th> <th>Architecture design</th> </tr> </thead> <tbody> <tr> <td>Rational Unified Process (Kruchten, 2004)</td> <td>Identify use cases that are architecturally significant</td> <td>Build an architectural prototype</td> </tr> <tr> <td>BAPO/CAFCR (America et al., 2003)</td> <td>Identify elements in the business, process and organization context that are relevant to the architecture</td> <td>Elaborate five CAFCR views, adding or refining artifacts (documents, models, code, etc.) suitable for a particular system</td> </tr> <tr> <td>Attribute Driven Design (Bass et al., 2003)</td> <td>Identify architectural drivers that stakeholders have prioritized according to business and mission goals</td> <td>Recursively decompose the system using architectural patterns and tactics that satisfy the architectural drivers</td> </tr> <tr> <td>Siemens Four Views (Hofmeister et al., 2000)</td> <td>Identify organizational, technological and product factors that influence the architecture</td> <td>Make design decisions based on solution strategies identified for the influencing factors</td> </tr> </tbody> </table>

The problem with this is that early in the elaboration phase the system exists only as an evolutionary prototype. Our work explores an approach dealing with functional and non-functional requirements in an integrated manner that draws on the strengths of the different design methodologies. We chose OOAD for its strengths in the analysis and design of a system’s micro-architecture and integrated it with ADD, an approach that most clearly articulates the creation of the macro-architecture of a system by recursively applying architectural patterns and tactics. Since ADD requires architectural drivers as input but does not say how these are obtained, we chose QAW (Bachmann et al., 2002; Barbacci et al., 2000) for gathering architecturally significant requirements prioritized by stakeholders according to the business and mission goals of the system under design.

7. Conclusions

Every system has a rationale for its creation. This rationale takes the form of business goals set forth by the organization creating the system and has a strong influence on the architecture of the system under consideration.
In this paper we describe an architecture development process that takes these business drivers into consideration in the determination of the coarse-grained system components and the appropriate separation of concerns, while still providing the necessary guidance to determine the fine-grained architectural detail. We do this by integrating architecture-centric methods, which derive the systemic properties or quality attributes that guide the architecture of a system from stated business goals, with a generalized OOAD approach. The rationale for this combination is straightforward. OOAD strives for semantic closeness to the domain and seeks to maximize functional cohesion. To achieve this, these approaches focus on the principles of abstraction, encapsulation, information hiding and separation of concerns to define the structure of the system. In contrast, architecture-centric methods use architectural tactics associated with systemic properties as a guide for decomposing a system. When systemic properties are critical, architectures designed to maximize functional cohesion can fail.

To demonstrate the integrated approach, we described an example of its application in a multi-university global development study. The particular case analyzed was for a system called MSLite from the building automation domain. Through this case study we demonstrated the differences between OOAD and architecture-centric approaches, followed by the process and outcomes of the integrated approach that provides the benefits of both. Using architecture-centric methods we were clearly able to take into account the business goals and their related quality attributes in formulating a high level architecture. The analysis activities of OOAD provided a broad functional understanding of the system that served as input to the quality attribute scenarios used for the high level architecture. This architecture was then used as a basis for doing further detailed design and implementation using OOAD.
VIII A Prototype Theorem Prover

Reasoning is the ability to make inferences, and automated reasoning is concerned with the building of computing systems that automate this process. *Stanford Encyclopedia of Philosophy*

In this chapter we present a prototype implementation of the systems $\text{SC}_{\text{ALC}}$ and $\text{SC}_{\text{ALCQI}}$. We chose to implement the sequent calculi because they represent a first step towards an ND implementation. The prototype theorem prover was implemented in Maude [18]. In Section VIII.1 we present the Maude system and language, and in Section VIII.2 we describe the prototype implementation.

VIII.1 Overview of the Maude System

This section presents a general overview of the main characteristics of the Maude system and language. A complete description of Maude can be found in [18]; we only present the aspects of Maude used in our implementation. Moreover, we will not present the theoretical foundations of Maude in rewriting logic [47], since our implementation uses the Maude system as an interpreter for the Maude language; we did not explore any possible mapping between description logics and rewriting logic.

Maude’s basic programming statements are very simple and easy to understand. They are equations and rules, and both have a simple rewriting semantics in which instances of the lefthand side pattern are replaced by corresponding instances of the righthand side. Maude programs are organized in modules. Maude modules containing only equations are called functional modules; modules containing rules are called system modules. In both cases, besides equations and rules, modules may contain declarations of sorts (types), operators and variables. A functional module defines one or more functions by means of equations. Equations are used as simplification rules: replacement of equals by equals is performed only from left to right, as simplification rewriting.
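As a non-Maude illustration of equations applied as left-to-right simplification rules, the following Python sketch normalizes Peano-arithmetic terms. The two equations in the comments are illustrative assumptions for this sketch, not part of the prototype.

```python
# Peano addition simplified by two equations, applied left-to-right
# until no equation matches (illustrative sketch, not Maude):
#   eq N + 0    = N .
#   eq N + s(M) = s(N + M) .
def simplify(term):
    """term is "0", ("s", t) or ("+", t1, t2); returns its canonical form."""
    if isinstance(term, tuple) and term[0] == "+":
        n, m = simplify(term[1]), simplify(term[2])
        if m == "0":
            return n                                  # eq N + 0 = N
        if isinstance(m, tuple) and m[0] == "s":
            return ("s", simplify(("+", n, m[1])))    # eq N + s(M) = s(N + M)
        return ("+", n, m)
    if isinstance(term, tuple) and term[0] == "s":
        return ("s", simplify(term[1]))
    return term
```

Each recursive call applies an equation left-to-right until none matches, so every term reaches a unique final result, mirroring what Maude's equational simplification guarantees for terminating and confluent sets of equations.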
A functional specification is expected to be terminating and confluent, so that every term has a unique final result (its canonical form). Finally, Maude equations can be conditional, that is, they are only applied if a certain condition holds.

A Maude module containing rules and possibly equations is called a system module. Rules are also computed by rewriting from left to right, but they are not equations. Instead, they are understood as local transitions between states in a possibly concurrent system. For instance, a distributed banking system can be represented as account objects and messages floating in a “soup”, that is, in a multi-set or bag of objects and messages. Such objects and messages in the soup can interact locally with each other according to specific rewrite rules. The systems specified by rules can be highly concurrent and nondeterministic. Unlike for equations, there is no assumption that all rewrite sequences will lead to the same outcome. Furthermore, for some systems there may not be any final states; their whole point may be to continuously engage in interactions with their environment as reactive systems. Note that, since the Maude interpreter is sequential, concurrent behavior is simulated by corresponding interleavings of sequential rewriting steps. Logically, when rewriting logic is used as a logical framework to represent other logics, a rule specifies an inference rule, and rewriting steps therefore represent inference steps.

Maude has two varieties of types: sorts, which correspond to well-defined data, and kinds, which may contain error elements. Sorts can be structured in subsort hierarchies, with the subsort relation understood semantically as subset inclusion. This allows support for partial functions, in the sense that a function whose application to some arguments has a kind but not a sort should be considered undefined for those arguments. Furthermore, operators can be subsort-overloaded, providing a useful form of subtype polymorphism. In Maude the user can specify operators.
An operator has arguments (each of a given sort) and a result sort. Each operator has its own syntax, which can be prefix, postfix, infix, or a “mixfix” combination; the places where the arguments appear in the mixfix syntax are indicated with underscores. The combination of user-definable syntax with equations and equational attributes for matching leads to a very expressive capability for specifying any user-definable data. This is one of the main reasons that makes Maude a perfect language/system for prototyping.

Rewriting with both equations and rules takes place by matching a lefthand side term against the subject term to be rewritten. The simplest form of matching is syntactic matching, in which the lefthand side term is matched as a tree against (the tree representation of) the subject term. Nevertheless, Maude also allows more expressive matching, called “equational matching”: when we define operators in Maude we can use attributes like \texttt{assoc} (associative) and \texttt{comm} (commutative), called equational attributes. For instance, if an operator is defined with both of these attributes, terms having this operator as the principal operator (the most external one) are matched not as trees but as multi-sets, that is, modulo associativity and commutativity. In general, a binary operator declared in a Maude module can be defined with any combination of the equational attributes: associativity, commutativity, left-, right-, or two-sided identity, and idempotency.

A Maude system module implements a \textit{rewrite theory} that must be \textit{admissible}, which means that rules should be coherent relative to the equations [18]. If a rewrite theory contains both rules and equations, rewriting is performed modulo such equations. Maude’s strategy for rewriting terms is to first apply the equations to reach a canonical form, and then do a rewriting step with a rule (in a rule-fair manner). This strategy is complete if we assume coherence.
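The equational (multiset) matching described above can be sketched in Python: matching a pattern against a term whose principal operator is associative and commutative reduces to a multiset test, since argument order and grouping are irrelevant. This is an illustration only; the function and argument names are assumptions.

```python
from collections import Counter

# Matching a pattern such as "X & a" against a term whose top operator is
# associative-commutative: the match succeeds iff the constants required
# by the pattern occur in the subject multiset, and the pattern variable
# is bound to whatever is left over.
def ac_match(pattern_consts, term_args):
    """Return the residue bound to the pattern variable, or None on failure.

    pattern_consts: constants the pattern requires (e.g. ["a"] for X & a)
    term_args: flattened arguments of the AC operator in the subject term
    """
    residue = Counter(term_args)
    for c in pattern_consts:
        if residue[c] == 0:
            return None          # required constant not present: no match
        residue[c] -= 1
    return sorted(residue.elements())
```

For example, the pattern `X & a` matches the subject `b & a & c` regardless of how the subject is parenthesized or ordered, binding `X` to the multiset `{b, c}`.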
Coherence means that we will not miss possible rewrites with rules that could have been performed if we had not insisted on first simplifying the term to its canonical form with the equations. Maude implicitly assumes this coherence property.

**VIII.2 A Prototype Theorem Prover**

In this section we present our Maude implementation of the $\mathcal{SC}_{\mathcal{ALC}}$ and $\mathcal{SC}^\bot_{\mathcal{ALC}}$ sequent calculi. We will omit trivial details of the implementation and focus on the important parts. Moreover, it is important to note that this prototype is available for download at http://github.com/arademaker/SALC and also includes the implementation of the $\mathcal{SC}_{\mathcal{ALCQI}}$ system and its counterpart $\mathcal{SC}^\bot_{\mathcal{ALCQI}}$. Those implementations are not described here since they do not differ considerably from the ones presented.

(a) \textbf{The Logical Language}

Due to the flexibility to specify user-definable data in Maude, the definition of the syntax of the description logics $\mathcal{ALC}$ and $\mathcal{ALCQI}$ was effortless. The language $\mathcal{ALC}$ is defined in the functional module \texttt{SYNTAX} below. We have defined sorts for atomic concepts and atomic roles besides the sorts for concepts and roles in general. The constants $\top$ and $\bot$ were also specified.

\begin{verbatim}
fmod SYNTAX is
  sorts AConcept Concept ARole Role .
  subsort AConcept < Concept .
  subsort ARole < Role .
  ops ALL EXIST : Role Concept -> Concept .
  ops CTRUE CFALSE : -> AConcept .
  op ~_ : Concept -> Concept .
  op _&_ : Concept Concept -> Concept [ctor gather (e E) prec 31] .
  op _|_ : Concept Concept -> Concept [ctor gather (e E) prec 32] .
  eq ~ CTRUE = CFALSE .
  eq ~ CFALSE = CTRUE .
endfm
\end{verbatim}

The syntax for defining operators is:

\begin{verbatim}
op NAME : Sort-1 Sort-2 ... -> Sort [attr-1 ...] .
\end{verbatim}

where NAME may contain underscores to identify argument positions in mixfix notation. The list of sorts before -> gives the arguments, and the sort after it is the sort of the resultant term.
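As a hypothetical illustration (not part of the prototype), the same abstract syntax can be mirrored in Python with nested tuples, including the two negation equations of the SYNTAX module; all names here are assumptions.

```python
# Nested-tuple mirror of the SYNTAX module: concepts are tuples, and
# negation is simplified on the two constants exactly as in the
# equations of the module.
def neg_concept(c):
    if c == "CTRUE":
        return "CFALSE"          # eq ~ CTRUE = CFALSE
    if c == "CFALSE":
        return "CTRUE"           # eq ~ CFALSE = CTRUE
    return ("~", c)

def conj(c1, c2):
    return ("&", c1, c2)         # op _&_ : Concept Concept -> Concept

def forall(role, c):
    return ("ALL", role, c)      # op ALL : Role Concept -> Concept

def exists(role, c):
    return ("EXIST", role, c)    # op EXIST : Role Concept -> Concept
```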
Since our $\mathcal{SC}_{\mathcal{ALC}}$ and $\mathcal{SC}_{\mathcal{ALCQI}}$ systems reason over labeled concepts, the next step was to extend the language with labels and some functions over them. A concept $\alpha$ labeled with $\forall R, \exists S$ is represented by the term \texttt{< al(R) ex(S) | A >}, where A is a constant of sort AConcept and R and S are constants of sort ARole. In the modules below we show the declarations of all operators but omit the specification of the logical operators has-quant, has-lt, and so on.

\begin{verbatim}
fmod LABEL is
  inc SYNTAX .
  sorts Label ELLabel ALLabel QLabel .
  subsorts ELLabel ALLabel QLabel < Label .
  ops gt lt : Nat Role -> QLabel .
  op ex : Role -> ELLabel .
  op al : Role -> ALLabel .
endfm
\end{verbatim}

The definition below of the operators neg and neg-aux should be clear but, being the first equational specification, it deserves an explanation. The operator \texttt{neg}(L) operates over the list of labels $L$, inverting all its quantifiers. In Section III.1 we represent such an operation as $\sim L$. We use \texttt{neg-aux} to iterate over the list, accumulating the result in its second argument until the first argument is completely consumed, at which point the second argument is returned.

\begin{verbatim}
fmod LALC-SYNTAX is
  inc LABEL .
  inc LIST{Label} .
  vars L1 L2 : List{Label} .
  var R : Role .
  var C : Concept .
  sorts Expression LConcept .
  subsort LConcept < Expression .
  op <_|_> : List{Label} Concept -> LConcept [ctor] .
  ops has-quant has-lt has-gt : List{Label} -> Bool .
  ops has-al has-ex : List{Label} -> Bool .
  op neg : List{Label} -> List{Label} .
  op neg-aux : List{Label} List{Label} -> List{Label} .
  ...
  eq neg(L1) = neg-aux(L1, nil) .
  eq neg-aux(L1 al(R), L2) = neg-aux(L1, ex(R) L2) .
  eq neg-aux(L1 ex(R), L2) = neg-aux(L1, al(R) L2) .
  eq neg-aux(nil, L2) = L2 .
endfm
\end{verbatim}

It is worth noting that this is not the only way to define \texttt{neg} in Maude; the auxiliary function is not necessary at all, but we will use this style frequently in our implementation.
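The behaviour of neg and neg-aux can be mirrored in a short Python sketch (an illustration, not part of the prototype): each quantifier label is inverted while the order of the list is preserved, because neg-aux consumes the list from the right and prepends each inverted label to the accumulator.

```python
# Labels are pairs ("al", role) or ("ex", role); neg mirrors neg/neg-aux.
def flip(label):
    # invert one quantifier label: al(R) <-> ex(R)
    kind, role = label
    return ("ex", role) if kind == "al" else ("al", role)

def neg(labels):
    acc = []
    rest = list(labels)
    while rest:
        # eq neg-aux(L1 al(R), L2) = neg-aux(L1, ex(R) L2) . (and dually)
        acc.insert(0, flip(rest.pop()))
    return acc
```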
Finally, the module LALC-SYNTAX declares the sorts \texttt{Expression} and \texttt{LConcept} (labeled concept). Expressions are labeled concepts, but the distinction can be useful for future extensions of the calculi.

(b) The Sequent Calculus

In the functional module SEQUENT-CALCULUS we implemented the generic data structures that are used by all sequent calculi. The idea is that a proof will be represented as a multi-set (“soup”) of goals and messages (operators with sort \texttt{State}). Goals are sequents with additional properties to keep the proof structure. Each goal has an identifier (a natural number), the goal it originates from, the name of the rule used to produce it, and the sequent. In this way, our proof is a graph represented as a multi-set of terms with sort \texttt{Proof}. The \texttt{goals} operator holds a set of natural numbers as its argument, the list of pending goals. The \texttt{next} operator is just an auxiliary operator that provides in each proof step the next goal identifier.

\begin{verbatim}
fmod SEQUENT-CALCULUS is
  inc LALC-SYNTAX .
  inc SET{Expression} .
  inc SET{Label} .
  ...
  sorts Sequent Goal State Proof .
  subsorts Goal State < Proof .
  op next : Nat -> State .
  op goals : Set{Nat} -> State .
  op [_from_by_is_] : Nat Nat Qid Sequent -> Goal [ctor] .
  op nil : -> Proof [ctor] .
  op _|-_ : Set{Expression} Set{Expression} -> Sequent
            [ctor prec 122 gather(e e)] .
  op _:_|-_:_ : Set{Expression} Set{Expression}
                Set{Expression} Set{Expression} -> Sequent
                [ctor prec 122 gather(e e e e)] .
  ...
endfm
\end{verbatim}

We must also note that we have defined two operators\footnote{Term constructors in Maude terminology, since these operators will never be reduced; they are used to hold data.} to construct sequents.
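Before looking at the sequent constructors in more detail, the goal bookkeeping just described can be mirrored in a small Python sketch (field and class names are assumptions; the real implementation is the Maude multiset above).

```python
from dataclasses import dataclass, field

# Goal mirrors the [_from_by_is_] constructor; Proof mirrors the "soup"
# together with the goals(...) and next(...) bookkeeping operators.
@dataclass(frozen=True)
class Goal:
    ident: int        # goal identifier
    origin: int       # goal it was produced from
    rule: str         # name of the rule that produced it
    sequent: tuple    # (antecedent, succedent), each a frozenset

@dataclass
class Proof:
    nodes: dict = field(default_factory=dict)   # ident -> Goal
    pending: set = field(default_factory=set)   # goals(...) argument
    next_id: int = 1                            # next(...) argument

    def add(self, goal):
        self.nodes[goal.ident] = goal
        self.pending.add(goal.ident)

    def close(self, ident):
        self.pending.discard(ident)

    @property
    def complete(self):
        # goals(empty): nothing left to prove
        return not self.pending
```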
The operator \texttt{\_|-\_} is the simplest sequent constructor, with two multi-sets of expressions, one on the left (the sequent antecedent, possibly empty) and the other on the right (the sequent succedent, possibly empty); it is used to implement $\mathcal{SC}_{\mathcal{ALC}}$. The operator \texttt{\_:\_|-\_:\_} is used by the frozen versions of $\mathcal{SC}_{\mathcal{ALC}}$ and $\mathcal{SC}_{\mathcal{ALCQI}}$; the two additional external sets of expressions hold the frozen formulas.

Consider the proof of the sequent $\forall R.(A \sqcap B) \Rightarrow \forall R.A \sqcap \forall R.B$ presented in Figure VIII.1. One proof constructed by our system is represented by the term below. Goal 0 is the initial state of the proof, and goals 5 and 6 are initial sequents. Goal 1 is obtained from goal 0 by applying the rule $\forall$-l. The empty argument of goals(\textit{empty}) represents the fact that this proof is complete: there are no remaining goals to be proved.

goals(empty) next(7)
[0 from 0 by 'init is < nil | ALL(R, A & B) > |- < nil | ALL(R, A) & ALL(R, B) >]
[1 from 0 by 'forall-l is < al(R) | A & B > |- < nil | ALL(R, A) & ALL(R, B) >]
[2 from 1 by 'and-l is < al(R) | A >, < al(R) | B > |- < nil | ALL(R, A) & ALL(R, B) >]
[3 from 2 by 'and-r is < al(R) | A >, < al(R) | B > |- < nil | ALL(R, A) >]
[4 from 2 by 'and-r is < al(R) | A >, < al(R) | B > |- < nil | ALL(R, B) >]
[5 from 3 by 'forall-r is < al(R) | A >, < al(R) | B > |- < al(R) | A >]
[6 from 4 by 'forall-r is < al(R) | A >, < al(R) | B > |- < al(R) | B >]

Figure VIII.1: An example of a proof in the implementation of $\mathcal{SC}_{\mathcal{ALC}}$

VIII.3 The $\mathcal{SC}_{\mathcal{ALC}}$ System

The $\mathcal{SC}_{\mathcal{ALC}}$ system was implemented in a system module. Basically, each rule of the system is a Maude rewriting rule; the rewriting procedure constructs the proof bottom-up.

\begin{verbatim}
mod SYSTEM is
  inc SEQUENT-CALCULUS .
  [rules and equations presented below]
endm
\end{verbatim}

The first observation regards the structural rules of $\mathcal{SC}_{\mathcal{ALC}}$. Since the left and right sides of the sequents are sets of formulas, we do not need permutation or contraction rules. We also proved in Section III.4 that the cut rule is not necessary. Nevertheless, we would lose completeness if we omitted the weak rules: we need them to allow the promotional rules applications.

Moreover, the initial sequents were implemented as an equation rather than as a rule. We used the fact that in Maude all rewriting steps with rules are executed modulo equational reductions. Implementing the initial sequents with an equation means that a goal detected as initial is removed from the goals list right away.

\begin{verbatim}
eq [ X from Y by Q is ALFA, E |- E, GAMMA ] goals((X, XS))
 = [ X from Y by Q is ALFA, E |- E, GAMMA ] goals((XS))
 [label initial] .

rl [weak-l] :
   [ X from Y by Q is ALFA, E |- GAMMA ] next(N) goals((X, XS))
=> [ X from Y by Q is ALFA, E |- GAMMA ] next(N + 1) goals((XS, N))
   [ N from X by 'weak-l is ALFA |- GAMMA ] .
\end{verbatim}

First we note the difference between rules and equations. They are very similar, except that the former uses => and the latter uses = as the term separator:

\begin{verbatim}
rl [label] : term-1 => term-2 [attr-1,...] .
eq term-1 = term-2 [attr-1,...] .
\end{verbatim}

We note that in each rule the goal being rewritten must be repeated on the left and right sides of the rule; see the weak rule above. If we omitted the goal on the right side of the rule we would be removing the goal from the proof.
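Read as a transformation of the proof state, the weak-l step above keeps the rewritten goal in the proof, adds a new goal with the weakened antecedent, and replaces the old identifier by the new one in the pending set. A hypothetical Python rendering (names and the dict-based state are assumptions):

```python
# One weak-l step on a simplified proof state (a dict instead of a soup):
# the rewritten goal record stays in "nodes", a new goal without the
# dropped formula is added, and the pending set is updated.
def weak_l(proof, goal_id, dropped):
    ante, succ = proof["nodes"][goal_id]
    assert dropped in ante
    new_id = proof["next"]                       # next(N)
    proof["nodes"][new_id] = (ante - {dropped}, succ)
    proof["next"] = new_id + 1                   # next(N + 1)
    proof["pending"] = (proof["pending"] - {goal_id}) | {new_id}
    return new_id
```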
We are actually including new goals in each step, that is, we put new goals into the “soup” of goals. Reading bottom-up, some rules create more than one (sub)goal from a goal. This is the case of rule $\sqcap$-r below. Besides that, whenever a rule has some additional proviso, we use Maude conditional rules to express the proviso in the rule condition. In the rule $\sqcap$-r, the proviso states that in the list of labels of the principal formula all labels must be universally quantified; in $\text{SC}_{\text{ALC}}$, this is the same as saying that $L$ cannot contain existentially quantified labels (\texttt{has-ex}(L)).

\begin{verbatim}
crl [and-r] :
   [ X from Y by Q is ALFA |- GAMMA, < L | A & B > ]
   next(N) goals((X, XS))
=> next(N + 2) goals((XS, N, N + 1))
   [ X from Y by Q is ALFA |- GAMMA, < L | A & B > ]
   [ N from X by 'and-r is ALFA |- GAMMA, < L | A > ]
   [ N + 1 from X by 'and-r is ALFA |- GAMMA, < L | B > ]
if not has-ex(L) .
\end{verbatim}

The rule condition can consist of a single statement or can be a conjunction formed with the associative connective $\wedge$. Rule promotional-$\exists$ has two conditions. The first, from left to right, is the rule proviso (all concepts on the left side of the sequent must have the same most external label); the second is actually just an instantiation of the variable GAMMA' with the auxiliary operator \texttt{remove-label}. GAMMA' will be the right side of the new sequent (goal) created. \texttt{remove-label} iterates over the concepts, removing their most external label.
\begin{verbatim}
crl [prom-exist] :
   [ X from Y by Q is < ex(R) L | A > |- GAMMA ]
   next(N) goals((X, XS))
=> next(N + 1) goals((XS, N))
   [ X from Y by Q is < ex(R) L | A > |- GAMMA ]
   [ N from X by 'prom-exist is < L | A > |- GAMMA' ]
if all-label(GAMMA, ex(R)) = true
/\ GAMMA' := remove-label(GAMMA, ex(R), empty) .
\end{verbatim}

The implementation of the remaining rules is straightforward. One more observation about the rules above: the argument of next(N) gives the next goal identifier, and the argument of goals holds the list of goals not yet solved. A derivation with goals(empty) in the ``soup'' is a completed proof of the sequent in the goal with identifier 0.

(a) The $\text{SC}^{\dagger}_{ALC}$ System Implementation

The system $\text{SC}^{\dagger}_{ALC}$ is implemented in a very similar way to $\text{SC}_{ALC}$. The main differences are that sequents now carry frozen concepts and that two additional rules had to be implemented. Concepts that were frozen together will never be unfrozen separately, so, instead of defining an operator to freeze a single concept, we defined a constructor of a set of frozen concepts.

\begin{verbatim}
mod SYSTEM is
  inc SEQUENT-CALCULUS .
  ...
  op [_,_,_] : Nat Nat Set{Expression} -> Expression .
\end{verbatim}

The constructor of a frozen set of concepts has three arguments. The first argument is the context identifier (see Section IV.2), created to group the pair of sets of concepts frozen together on the sequent antecedent and succedent. The second argument is the state of the context, where 0 means that the context is saved but not yet reduced (the context was frozen by a weak rule), and 1 means that the context was reduced (the context was frozen by the frozen-exchange rule). The last argument is the set of frozen concepts.
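The three-argument constructor can be pictured as a plain tuple `(id, state, concepts)`. The sketch below is illustrative only (the prototype defines these in Maude, not Python); it shows the state flag distinguishing saved from reduced contexts, and a fresh-identifier helper in the spirit of next-frozen:

```python
# Sketch of the frozen-context constructor [id, state, set] as a
# Python tuple.  0 = saved (frozen by a weak rule), 1 = reduced
# (frozen by frozen-exchange).  Names are illustrative only.

SAVED, REDUCED = 0, 1

def freeze(cid, concepts, state=SAVED):
    # Build one frozen context; frozenset makes it hashable/immutable.
    return (cid, state, frozenset(concepts))

def next_frozen(contexts):
    # Fresh context identifier: one past the largest id in use.
    return 1 + max((cid for cid, _, _ in contexts), default=-1)

ctx = freeze(next_frozen([]), {"A", "B"})
assert ctx == (0, SAVED, frozenset({"A", "B"}))
assert next_frozen([ctx]) == 1
```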
Almost all rules of $\text{SC}^{\dagger}_{ALC}$ do not touch the frozen concepts. This is the case of the negation rule below. We note the use of the operator neg, which inverts the list of labels of a concept.

\begin{verbatim}
rl [neg-l] :
   [ X from Y by Q is FALFA : ALFA, < L | ~ A > |- GAMMA : FGAMMA ]
   next(N) goals((X, XS))
=> next(N + 1) goals((XS, N))
   [ X from Y by Q is FALFA : ALFA, < L | ~ A > |- GAMMA : FGAMMA ]
   [ N from X by 'neg-l is FALFA : ALFA |- GAMMA, < neg(L) | A > : FGAMMA ] .
\end{verbatim}

The weak-r rule is implemented as the conditional rewrite rule below. The left- and right-hand sides of the sequent in goal X are frozen and added to the sets of frozen concepts on the left and right sides of the sequent in the new goal N. The variables FALFA and FGAMMA match the sets of frozen concepts on both sides. The weak-l rule is similar.

\begin{verbatim}
crl [weak-r] :
   [ X from Y by Q is FALFA : ALFA |- GAMMA, E : FGAMMA ]
   next(N) goals((X, XS))
=> next(N + 1) goals((XS, N))
   [ X from Y by Q is FALFA : ALFA |- GAMMA, E : FGAMMA ]
   [ N from X by 'weak-r is (FALFA, [M:Nat, 0, ALFA]) : ALFA
     |- GAMMA : (FGAMMA, [M:Nat, 0, (GAMMA, E)]) ]
if M:Nat := next-frozen(union(FALFA, FGAMMA)) .
\end{verbatim}

The other $\text{SC}^{\dagger}_{ALC}$ rule that modifies the set of frozen concepts in a goal is the frozen-exchange rule. The Maude pattern-matching mechanism was very useful in the implementation of this rule. The rule selects a context (a pair of sets of frozen concepts) to unfreeze, [O:Nat, 0, ES1] and [O:Nat, 0, ES2], and freezes the sets of formulas that are in the current context, ALFA and GAMMA. The selection is made by pattern matching modulo the commutativity and associativity of the operator comma, the constructor of Set\{Expression\} terms. The pattern also guarantees that only contexts that are saved but not already reduced (second argument equal to zero) will be selected.
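Before looking at the Maude rule, the exchange step can be pictured in a language-neutral sketch (Python; all names illustrative, and the deterministic `min` choice below replaces Maude's nondeterministic AC matching): pick a saved-but-not-reduced context pair, make it the current sequent, and freeze the current sequent as a reduced context.

```python
# Illustrative sketch of frozen-exchange.  Contexts are
# (id, state, concepts) triples; state 0 = saved, 1 = reduced.

def frozen_exchange(frozen_l, frozen_r, alfa, gamma, fresh_id):
    # A context id is eligible if it is saved-but-not-reduced
    # (state 0) on both sides of the sequent.
    ids = {cid for cid, state, _ in frozen_l if state == 0} & \
          {cid for cid, state, _ in frozen_r if state == 0}
    if not ids:
        return None                     # the rule does not fire
    cid = min(ids)                      # Maude: nondeterministic AC match
    es1 = next(cs for i, _, cs in frozen_l if i == cid)
    es2 = next(cs for i, _, cs in frozen_r if i == cid)
    # The chosen pair becomes the new current sequent; the old
    # current sequent is frozen as a *reduced* context (state 1).
    new_l = [c for c in frozen_l if c[0] != cid] + [(fresh_id, 1, alfa)]
    new_r = [c for c in frozen_r if c[0] != cid] + [(fresh_id, 1, gamma)]
    return new_l, es1, es2, new_r

out = frozen_exchange([(0, 0, {"A"})], [(0, 0, {"B"})], {"C"}, {"D"}, 1)
assert out == ([(1, 1, {"C"})], {"A"}, {"B"}, [(1, 1, {"D"})])
```

As the prototype discussion notes, which eligible context gets picked is exactly the degree of freedom that matters for performance; the sketch makes that choice explicit in one line.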
The new context created in the goal $N$ has its second argument equal to one: it is a reduced context. Maude's pattern-matching mechanism is very flexible and powerful. On the other hand, this rule does not provide much control over the choice of the context (set of frozen formulas) that will be unfrozen. This choice can have a huge impact on the performance of a proof construction.

\begin{verbatim}
crl [frozen-exchange] :
   [ X from Y by Q is [O:Nat, 0, ES1], FALFA : ALFA
     |- GAMMA : FGAMMA, [O:Nat, 0, ES2] ]
   goals((X, XS)) next(N)
=> goals((XS, N)) next(N + 1)
   [ X from Y by Q is [O:Nat, 0, ES1], FALFA : ALFA
     |- GAMMA : FGAMMA, [O:Nat, 0, ES2] ]
   [ N from X by 'frozen-exchange is ([M:Nat, 1, ALFA], FALFA) : ES1
     |- ES2 : (FGAMMA, [M:Nat, 1, GAMMA]) ]
if M:Nat := next-frozen(union(([O:Nat, 0, ES1], FALFA),
                              ([O:Nat, 0, ES2], FGAMMA))) .
\end{verbatim}

(b) The Interface

The current user interface of the prototype is the Maude prompt. We do not provide any high-level user interface yet, although different alternatives exist for it. For example, we could implement the DIG [2] interface using Maude external objects [18]. The system module THEOREM-PROVER is the main interface to the prototype. It basically declares some constants of the sorts AConcept (atomic concepts) and ARole (atomic roles), and the operator th\_end. This operator is ``syntactic sugar'' to assist the user in the creation of the proof term in its initial state, ready to be rewritten.

\begin{verbatim}
mod THEOREM-PROVER is
  inc SYSTEM .
  ops A B C D E : -> AConcept .
  ops R S T U V : -> ARole .
  op th_end : Sequent -> Goal .
  vars ALFA GAMMA : Set{Expression} .
  var SEQ : Sequent .
  eq th SEQ end = [ 0 from 0 by 'init is SEQ ] next(1) goals(0) .
\end{verbatim}
\begin{verbatim}
endm
\end{verbatim}

The module THEOREM-PROVER includes the module SYSTEM, where SYSTEM can be any of the implemented systems presented in the previous sections. With the help of the above module we can prove the theorem from Example 1, Equation (1), in two alternative ways.

$$\exists \text{child}.\top \sqcap \forall \text{child}.\neg(\exists \text{child}.\neg \text{Doctor}) \sqsubseteq \exists \text{child}.\forall \text{child}.\text{Doctor} \hspace{1cm} (1)$$

We can use the already declared constants, assuming $A = \text{Doctor}$ and the role $R = \text{child}$, or we can declare two new constants in a module that imports THEOREM-PROVER.

\begin{verbatim}
mod MY-TP is
  inc THEOREM-PROVER .
  op child : -> ARole .
  op Doctor : -> AConcept .
endm
\end{verbatim}

In the second case, after entering the module MY-TP in Maude, we can test the proof initialization with the Maude command reduce (red). This command rewrites the given term using only equations; here, only the equation of the operator th\_end from module THEOREM-PROVER is applied.

\begin{verbatim}
Maude> red th < nil | EXIST(child, CTRUE) &
                ALL(child, ~ EXIST(child, ~ Doctor)) > |-
              < nil | EXIST(child, ALL(child, Doctor)) > end .
result Proof: next(1) goals(0)
  [0 from 0 by 'init is
    < nil | EXIST(child, CTRUE) & ALL(child, ~ EXIST(child, ~ Doctor)) > |-
    < nil | EXIST(child, ALL(child, Doctor)) > ]
\end{verbatim}

To construct a proof of a given sequent, we can use the Maude `rewrite` or `search` commands. The former returns one possible sequence of rewriting steps until a `canonical term`\textsuperscript{3} is reached. The latter searches all possible paths of rewriting steps from the given initial state to the given final state. Below we present the same sequent with `Doctor` and `child` replaced by `A` and `R`, respectively. As we can see, due to the presence of the weak rules and the lack of a strategy to control the application of the rules, we fail to obtain a proof of a valid sequent using the command `rewrite`.
```maude
Maude> rew th < nil | EXIST(R, CTRUE) & ALL(R, ~ EXIST(R, ~ A)) > |-
              < nil | EXIST(R, ALL(R, A)) > end .
result Proof: next(3) goals(2)
[0 from 0 by 'init is
  < nil | EXIST(R, CTRUE) & ALL(R, ~ EXIST(R, ~ A)) > |-
  < nil | EXIST(R, ALL(R, A)) >]
[1 from 0 by 'weak-l is empty |- < nil | EXIST(R, ALL(R, A)) >]
[2 from 1 by 'weak-r is empty |- empty]
```

The `rewrite` command explores just one possible sequence of rewrites of a system described by a set of rewrite rules and an initial state. The `search` command allows one to explore (following a breadth-first strategy) the reachable state space in different ways. Using the `search` command we can ask for all possible proof trees that can be constructed for a given sequent. Moreover, we can limit the search space with the two optional parameters `[n,m]`, where `n` provides a bound on the number of desired solutions and `m` states the maximum depth of the search. The search arrow `=>!` indicates that only canonical final states are allowed, that is, states that cannot be further rewritten. On the left-hand side of the search arrow we have the starting term; on the right-hand side, the pattern that has to be reached, in the case below `P:Proof goals(empty)`.

```maude
Maude> search [1,20] th < nil | EXIST(R, CTRUE) & ALL(R, ~ EXIST(R, ~ A)) > |-
                        < nil | EXIST(R, ALL(R, A)) > end
       =>! P:Proof goals(empty) .

P:Proof --> next(10)
[0 from 0 by 'init is
```

\textsuperscript{3}A term that cannot be further rewritten.
```maude
  < nil | EXIST(R, CTRUE) & ALL(R, ~ EXIST(R, ~ A)) > |-
  < nil | EXIST(R, ALL(R, A)) >]
[1 from 0 by 'and-l is
  < nil | ALL(R, ~ EXIST(R, ~ A)) >, < nil | EXIST(R, CTRUE) > |-
  < nil | EXIST(R, ALL(R, A)) >]
[2 from 1 by 'forall-l is
  < nil | EXIST(R, CTRUE) >, < al(R) | ~ EXIST(R, ~ A) > |-
  < nil | EXIST(R, ALL(R, A)) >]
[3 from 2 by 'neg-l is
  < nil | EXIST(R, CTRUE) > |-
  < nil | EXIST(R, ALL(R, A)) >, < ex(R) | EXIST(R, ~ A) >]
[4 from 3 by 'exist-r is
  < nil | EXIST(R, CTRUE) > |-
  < ex(R) | ALL(R, A) >, < ex(R) | EXIST(R, ~ A) >]
[5 from 4 by 'forall-r is
  < nil | EXIST(R, CTRUE) > |-
  < ex(R) | EXIST(R, ~ A) >, < ex(R) al(R) | A >]
[6 from 5 by 'exist-r is
  < nil | EXIST(R, CTRUE) > |-
  < ex(R) ex(R) | ~ A >, < ex(R) al(R) | A >]
[7 from 6 by 'exist-l is
  < ex(R) | CTRUE > |-
  < ex(R) ex(R) | ~ A >, < ex(R) al(R) | A >]
[8 from 7 by 'prom-exist is
  < nil | CTRUE > |- < ex(R) | ~ A >, < al(R) | A >]
[9 from 8 by 'neg-r is
  < nil | CTRUE >, < al(R) | A > |- < al(R) | A >]
```

Above, the variable `P` in the input pattern was bound in the result to the desired proof term, that is, the one with `goals(empty)`. Since `P` was the only variable in the pattern, the result shows only one binding. In other words, search results are bindings for the variables in the pattern given after the search arrow.

Distributed with our prototype there is a simple Maude-to-\LaTeX{} proof-term translator developed by Caio Mello.\textsuperscript{4} The translator receives as input a term like the one above and returns its \LaTeX{} representation, using the \LaTeX{} package bussproofs [12]. The output is the proof tree of the derivation above, typeset with bussproofs; the rendering is not reproduced here.

\textsuperscript{4}An undergraduate student working at the TecMF/PUC-Rio Lab.

(c) Defining Proof Strategies

An automated theorem prover would not be efficient, or even useful, if we could not provide strategies for the application of the deduction rules. Moreover, from Section IV.2 we know that the $\text{SC}^{\dagger}_{ALC}$ deduction rules were designed to be used under a very specific strategy. Maude supports two ways to define strategies for the application of rewrite rules. The first option is the original one: we can use Maude's reflection features to control rule application at the metalevel, developing fully user-definable internal strategies. The second option is to use the Maude Strategy Language [25]. The strategy language allows the definition of strategy expressions that control the way a term is rewritten. It was designed to be used at the object level, rather than at the metalevel. There is a strict separation between the rewrite rules in system modules and the strategy expressions, which are specified in separate strategy modules. Moreover, a strategy is described as an operation that, when applied to a given term, produces a set of terms as a result, since the process is nondeterministic in general. In the current version of Maude, not all features of the strategy language are available in Core Maude. To be more precise, Core Maude does not support recursive strategies. Recursion is achieved by giving a name to a strategy expression and using this name in the strategy expression itself or in other related strategies. Given that limitation, we use the prototype implementation of the strategy language in Full Maude [18].
In our current prototype version we defined the strategy described in Section IV.2 to control the application of the $\text{SC}^{\dagger}_{ALC}$ rules. The basic strategies consist of the application of a rule (identified by the corresponding rule label) to a given term. Strategy operators allow the construction of complex strategy expressions. The strategy expand presented below controls how the rules of $\text{SC}^{\dagger}_{ALC}$ ought to be applied. It can be interpreted as follows: the system must first try to reduce the given term using one of the promotional rules (the union operator is |). If this is successful, the system must try to further transform the resulting term using $\sqcap$-\{l,r\}, $\sqcup$-\{l,r\}, $\forall$-\{l,r\}, $\exists$-\{l,r\} or $\neg$-\{l,r\} (the operator ; is concatenation). If neither the promotional rules nor the previously mentioned rules can be applied, one of the weak rules should be tried. If none of the previous rules can be applied, the frozen-exchange rule must be tried.

```
(smod BACKTRACKING-STRAT is
  strat solve : @ Proof .
  strat expand : @ Proof .
  var P : Proof .

  sd expand := ( ( try(prom-exist | prom-all) ;
                   ( and-l | and-r | or-l | or-r |
                     forall-l | forall-r | exist-l | exist-r |
                     neg-l | neg-r ) )
                 orelse ( weak-l | weak-r ) )
               orelse frozen-exchange .

  sd solve := if (match P s.t. is-solution(P)) then idle
              else expand ;
                   if (match P s.t. is-ok(P)) then solve else idle fi
              fi .
endsm)
```

The strategy expand defines how each proof step is performed. The solve strategy is the complete strategy to construct a proof. It is basically a backtracking procedure: at each step, the system verifies whether it already has a solution, using the defined operator is-solution. If the term is not a solution, it executes the expand step and checks whether the resulting term is valid, that is, a term still useful for reaching a solution; this is done with the operator is-ok. If the term is still valid but not yet a solution, it continues recursively. The implementations of is-solution and is-ok were done in a separate module.
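Stripped of Maude specifics, solve/expand is a plain recursive backtracking procedure. The sketch below is illustrative only (the toy `expand` and predicates stand in for the prototype's operators): try one step in every possible way, prune invalid terms with `is_ok`, and backtrack on failure.

```python
# Illustrative sketch of the solve/expand strategy as recursive
# backtracking; expand enumerates all one-step successors.

def solve(term, expand, is_solution, is_ok):
    if is_solution(term):
        return term
    for candidate in expand(term):       # one proof step, all ways
        if is_ok(candidate):             # prune loops / dead ends
            result = solve(candidate, expand, is_solution, is_ok)
            if result is not None:
                return result
    return None                          # backtrack

# Toy instance: repeatedly subtract 2 or 3, aiming for exactly 0.
expand = lambda k: [k - 2, k - 3] if k > 0 else []
print(solve(7, expand, lambda k: k == 0, lambda k: k >= 0))  # -> 0
```

The correspondence is loose but direct: `is_solution` plays the role of matching `goals(empty)`, and `is_ok` plays the role of the loop check described next.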
The operator is-ok evaluates to false whenever we detect a loop in the proof construction. There are different loop situations; below we present one of them, in which a sequent has two equal sets of frozen formulas (contexts).

\begin{verbatim}
op is-ok : Proof -> Bool .
op is-solution : Proof -> Bool .

eq is-solution(P:Proof goals(empty)) = true .
eq is-solution(P:Proof) = false [owise] .
...
eq is-ok(P:Proof
   [ M from N by RL is FALFA1, [X1, X3, FALFA0], [X2, X4, FALFA0] : ALFA
     |- GAMMA : [X1, X3, FGAMMA0], [X2, X4, FGAMMA0], FGAMMA1 ]) = false .
eq is-ok(P:Proof) = true [owise] .
\end{verbatim}

Using the solve strategy defined above, we can prove the subsumption from Equation (1) in $\text{SC}^{\dagger}_{ALC}$. We use the strategy-aware command \texttt{srew} instead of \texttt{rew}. In addition, since we are now using Full Maude, the command at the Maude prompt is enclosed in parentheses.

\begin{verbatim}
Maude> (srew th empty : < nil | EXIST(R, CTRUE) &
                          ALL(R, ~ EXIST(R, ~ A)) > |-
                < nil | EXIST(R, ALL(R, A)) > : empty end
        using solve .)
\end{verbatim}
\begin{verbatim}
result Proof : goals(empty) next(10)
[0 from 0 by 'init is empty :
  < nil | EXIST(R, CTRUE) & ALL(R, ~ EXIST(R, ~ A)) > |-
  < nil | EXIST(R, ALL(R, A)) > : empty]
[1 from 0 by 'and-l is empty :
  < nil | ALL(R, ~ EXIST(R, ~ A)) >, < nil | EXIST(R, CTRUE) > |-
  < nil | EXIST(R, ALL(R, A)) > : empty]
[2 from 1 by 'forall-l is empty :
  < nil | EXIST(R, CTRUE) >, < al(R) | ~ EXIST(R, ~ A) > |-
  < nil | EXIST(R, ALL(R, A)) > : empty]
[3 from 2 by 'exist-l is empty :
  < al(R) | ~ EXIST(R, ~ A) >, < ex(R) | CTRUE > |-
  < nil | EXIST(R, ALL(R, A)) > : empty]
[4 from 3 by 'exist-r is empty :
  < al(R) | ~ EXIST(R, ~ A) >, < ex(R) | CTRUE > |-
  < ex(R) | ALL(R, A) > : empty]
[5 from 4 by 'forall-r is empty :
  < al(R) | ~ EXIST(R, ~ A) >, < ex(R) | CTRUE > |-
  < ex(R) al(R) | A > : empty]
[6 from 5 by 'neg-l is empty :
  < ex(R) | CTRUE > |-
  < ex(R) | EXIST(R, ~ A) >, < ex(R) al(R) | A > : empty]
[7 from 6 by 'prom-exist is empty :
  < nil | CTRUE > |-
  < nil | EXIST(R, ~ A) >, < al(R) | A > : empty]
[8 from 7 by 'exist-r is empty :
  < nil | CTRUE > |-
  < al(R) | A >, < ex(R) | ~ A > : empty]
[9 from 8 by 'neg-r is empty :
  < nil | CTRUE >, < al(R) | A > |-
  < al(R) | A > : empty]
\end{verbatim}
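The is-ok loop check used by solve can be pictured independently of Maude (the representation below is illustrative): a goal whose frozen contexts contain two distinct contexts with the same set of formulas signals a loop.

```python
# Illustrative sketch of the is-ok loop check.  Frozen contexts are
# (id, state, frozenset-of-formulas) triples; two different contexts
# with equal formula sets mean the proof search is looping.

def is_ok(frozen_contexts):
    seen = set()
    for _cid, _state, formulas in frozen_contexts:
        if formulas in seen:
            return False        # same context frozen twice: a loop
        seen.add(formulas)
    return True

assert is_ok([(0, 0, frozenset({"A"})), (1, 0, frozenset({"B"}))])
assert not is_ok([(0, 0, frozenset({"A"})), (1, 1, frozenset({"A"}))])
```

In the Maude version the same test is expressed purely by pattern matching: the left-hand side of the `is-ok` equation matches two bracketed contexts sharing the variable for their formula set.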
Assurance-driven development in Problem Oriented Engineering

Jon G. Hall, Lucia Rapanotti
Centre for Research in Computing, The Open University, UK
{J.G.Hall,L.Rapanotti}@open.ac.uk

Abstract

Problem Oriented Engineering (POE) is a Gentzen-style 'natural' framework for engineering design. As such, POE supports rather than guides its user as to the particular sequence of design steps that will be used; the sequencing is user-determined as that most appropriate to the context of application. In this paper, however, we suggest a sequencing of steps and interactions with stake-holders that is suitable for assurance-driven development, i.e., for developments in which the argument of fitness-for-purpose is produced during design.

1 Introduction

By engineering design (shortly, design), we refer to the creative, iterative and often open-ended process of conceiving and developing products, systems and processes (adapted from [EDD]). Engineering design processes by necessity include the identification and clarification of requirements, the understanding and structuring of the context into which the engineered system will be deployed, the specification of a design for a solution that can ensure satisfaction of the requirements in context, and the construction of arguments, convincing for all validating stake-holders, that the engineered system will provide the functionality and qualities that are needed. The involvement of stake-holders motivates the development of an explicit assurance case that collects evidence of the designed artefact's fitness for purpose. Traditionally, assurance cases have been compiled after the fact: the artefact is designed first, and evidence of its fitness-for-purpose is collected afterwards.
The distancing of the artefact from the argument requires higher levels of design expertise and can be more costly, as errors are found only late in the process; this has led to calls for evidence to be gathered during development, even acting as a driver for the design, which we have termed assurance-driven design. In previous work [HMR07, MHR07b, MHR07a, MHR07c], we have shown how the Problem Oriented Engineering (POE) framework, instantiated as Problem Oriented Software Engineering (POSE) [HRJ08], can be used in this role, and described in [HMR07] how a 'POSE safety process pattern' can be defined through which assurance-driven design can proceed. In this paper, we generalise many of the characteristics of that process pattern from its POSE inception to engineering design under POE. As well as allowing assurance-driven design, the generalised POE process pattern has the following additional characteristics:

- it supports each of the engineering design process elements described above;
- it provides a vehicle for assurance-case-driven design, with documentation and analysis of the rationale for decisions;
- it allows for the explicit consideration of the risks involved in design;
- it allows rich traceability between requirements, domain assumptions and system components;
- it is parametrisable for use in diverse engineering domains.

The paper is structured as follows. Section 2 provides a brief introduction to POE. Section 3 introduces the POE process pattern. Section 4 presents the case study. Section 5 reflects on what has been achieved in the paper.

2 Problem Oriented Engineering

A full presentation of the POE framework is beyond the scope of this paper, but it can be found, instantiated for software design, in [HRJ08]. POE is a formal system for working with non-formal and formal descriptions.
Problem Oriented Engineering (POE) is a Gentzen-style 'natural' framework for engineering design. As such, POE supports rather than guides its user as to the particular sequence of design steps that will be used, the user choosing the sequence of steps that they deem most appropriate to the context of application. The basis of POE is the problem sequent, which represents design problems requiring designed solutions. The transformations defined in POE transform problems, as sequents, into others in ways that preserve solutions (in a sense that will become clear). When we have managed to transform a problem to axioms\[^1\] we have solved the problem, and we will have a designed solution for our efforts. POE is designed to work with problems, not propositions as in the original natural deduction: the characteristic that distinguishes it most from natural deduction is the guarding of transformations by justification obligations, the discharge of which establishes the 'soundness' of the application with respect to stake-holders. Natural deduction is based on a single absolute notion of correctness provided by proof whereas, through justifications, POE caters for the engineering notion of fitness-for-purpose, something that is often very far from correctness. In the following we recall some of the basic definitions of the framework that will be used during the case study. The interested reader is referred to http://mcs.open.ac.uk/jgh23/ for more detail.

### 2.1 Problems

A problem, as defined in POE, has three elements: a real-world context, $W$, a requirement, $R$, and a solution, $S$. The problem context is a collection of domains $(W = D_1, ..., D_n)$ described in terms of their known, or indicative, properties, which interact through their sharing of phenomena (i.e., events, commands, states, etc. [Jac01b]). More precisely, a domain is a set of related phenomena that are usefully treated as a behavioural unit for some purpose.
A domain $D^{c}_{o}(p) = N : E$ has a name ($N$) and a description ($E$), the description indicating the possible values and/or states that the domain's phenomena can occupy, how those values and states change over time, how phenomena occur, and when. Of the phenomena: $c$ are those controlled by $D$, i.e., visible to, and sharable by, other domains but whose occurrence is controlled by $D$; $o$ are those observed by $D$, i.e., made visible by other domains, whose occurrence is observed by $D$; $p$ are those unshared by $D$, i.e., sharable by no other domain.

A problem's requirement states how a proposed solution description will be assessed as the solution to that problem. Like a domain, a requirement is a named description with phenomena, $R^{cons}_{refs} = N : E$. A requirement description should always be interpreted in the optative mood, i.e., as expressing a wish. As to the requirement's phenomena: $cons$ are those constrained by $R$, i.e., those whose occurrence is constrained by the requirement, and whose occurrence the solution affects in providing a solution; $refs$ are those referenced by $R$, i.e., those whose occurrence is referred to but not constrained by the requirement.

A solution is also a domain, $S^{c}_{o}(p) = N : E$, intended to solve a problem, i.e., one which, when introduced into the problem context, will satisfy the problem's requirement. The possible descriptions of a solution range over many forms, from high-level specification through to detailed designs. As a domain, a solution has controlled, observed and unshared phenomena; the union of the controlled and observed sets is termed the specification phenomena of the problem.

A problem's elements come together in POE in a problem sequent:\[^2\]

$$D_1{}^{c_1}_{o_1}(p_1), ..., D_n{}^{c_n}_{o_n}(p_n), S^{c}_{o}(p) \vdash R^{cons}_{refs}$$

Here $\vdash$ is the problem builder and reminds us that it is the relation of the solution to its context and to the requirements that we seek to explore.
By convention, the problem's solution domain, $S$, is always positioned immediately to the left of the $\vdash$. The descriptions of a problem's elements may be in any language, different elements being described in different languages, should that be appropriate. So that descriptions in many languages may be used together in the same problem, POE provides a semantic meta-level for the combination of descriptions; notationally, this is a role of the ',' that collects into a problem sequent the domains that appear around the turnstile, formally making each visible to the others.\[^3\]

### 2.2 Problem transformation

Problem transformations capture discrete steps in the problem solving process. Many classes of transformations are recognised in POE, reflecting a variety of engineering practices reported in the literature or observed elsewhere. Problem transformations relate a problem and a justification to (a set of) problems. Problem transformations conform to the following general pattern. Suppose we have problems $W, S \vdash R$ and $W_i, S_i \vdash R_i$, $i = 1, ..., n$ ($n \geq 0$), and justification $J$; then we will write:

$$\frac{W_1, S_1 \vdash R_1 \quad \cdots \quad W_n, S_n \vdash R_n}{W, S \vdash R}\;[\text{NAME}]$$

\[^1\]An axiomatic problem is a problem for which a fit-for-purpose solution is already known.

\[^2\]As here, for brevity, we will sometimes omit the phenomena decorations and descriptions in $W$, $S$ and $R$ whenever they can be inferred from context.

\[^3\]A situation similar to that found in the propositional calculus, in which conjunction and disjunction, etc., serve to combine the truth values of the atomic propositions.
to mean that, derived from an application of the NAME problem transformation schema (discussed below), $S$ is a solution of $W, S \vdash R$ with adequacy argument $(CA_1 \wedge \ldots \wedge CA_n) \wedge J$ whenever $S_1, \ldots, S_n$ are solutions of $W_1, S_1 \vdash R_1$, ..., $W_n, S_n \vdash R_n$, with adequacy arguments $CA_1, \ldots, CA_n$, respectively.

Engineering design under POE proceeds in a step-wise manner: the initial problem forms the root of a development tree, with transformations applied to extend the tree upwards towards its leaves. Branches are completed by problem transformations that leave the empty set of premise problems.\[^4\]

### 2.3 Assurance-driven Development

A problem transformation schema defines a named class of problem transformations, describing the way in which the conclusion problem (that below the line) is related to the premise problem(s) (those above the line). How a problem is transformed is specified in a problem transformation schema by pattern matching on the elements of the conclusion problem. Here is the transformation schema for CONTEXT INTERPRETATION, by which the context $W$ is interpreted as $W'$:

$$\frac{W', S \vdash R}{W, S \vdash R}\;[\text{CONTEXT INTERPRETATION}]\quad \langle\text{Explain and justify the use of } W' \text{ over } W\rangle$$

The justification obligation is a condition that must be discharged for an application of a schema to be solution-preserving. Each schema has its own general form of justification obligation; that for CONTEXT INTERPRETATION is shown in the rule. However, the specific form will depend upon the development context as well as other factors.
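A problem sequent and a transformation step, with its justification obligation carried alongside, can be given a minimal sketch in code. The datatypes below are hypothetical (POE itself prescribes no such representation); they only illustrate how a CONTEXT INTERPRETATION step pairs the transformed problem with the obligation to be discharged by stake-holders.

```python
# Hypothetical datatypes sketching a POE problem sequent W, S |- R
# and a transformation step that records its justification obligation.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Problem:
    context: Tuple[str, ...]   # W: the domains D1, ..., Dn
    solution: str              # S: the solution domain
    requirement: str           # R: the requirement

@dataclass(frozen=True)
class Step:
    premise: Problem           # the problem(s) above the line
    conclusion: Problem        # the problem below the line
    schema: str                # e.g. "CONTEXT INTERPRETATION"
    justification: str         # obligation to be discharged

def context_interpretation(p, new_context, justification):
    # Replace W by W', keeping S and R unchanged; the justification
    # obligation travels with the step, not with the problem.
    premise = Problem(tuple(new_context), p.solution, p.requirement)
    return Step(premise, p, "CONTEXT INTERPRETATION", justification)

step = context_interpretation(
    Problem(("W",), "S", "R"), ("W'",),
    "Explain and justify the use of W' over W")
assert step.premise.context == ("W'",)
assert step.conclusion.requirement == "R"
```

A development tree is then just a collection of such steps whose premises are themselves transformed further, until axiomatic problems (empty premise sets) are reached.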
A discharged justification obligation contributes towards the adequacy argument: in assurance-driven design, the needs of an assurance case will be paramount in determining the justifications that should be sought, and so which rules should be applied and in which sequence. The structure of a justification within assurance-driven development has a special form, reflecting the needs of the assurance case that will be designed alongside the product. Suppose, for instance, we wish to perform the step labelled STEP ID, which transforms the problem \( P \) under the NAME transformation schema; then the justification will typically consist of the following:[^4]

**STEP ID: Application of NAME to problem \( P \)**

**JUSTIFICATION \( J \):** A justification can be named for ease of reference.

**DESCRIPTIONS & PHENOMENA:** The collection of descriptions and phenomena of the domains and requirements introduced into the problem by the step, or the manipulations defined thereon by the step.

**CONCERN:** Name
**STATUS:** Status

A concern (cf. [Jac01b]) is something that is important to the development, presumably because it relates to some stake-holder in the process. For instance, the reliability concern is likely to arise: a design that does not address such a concern in such a context is likely to be unverifiable. The status of a concern is one of pending, discharged or undischargeable. The work appertaining to the discharge of a concern is structured: each concern has associated with it the following:

**CLAIM:** The statement of the claim(s) that will discharge the concern.

**ARGUMENT & EVIDENCE:** The reason to believe each claim (or the reason it does not hold).

**RISKS:** A description of the risks involved in continuing the development should the concern fail to be discharged, and/or the secondary risk introduced by the discharge of the concern; a description of the treatment of risks residual to the step.

A concern established as part of a step may be addressed (and therefore discharged) in design steps subsequent to that in which it is established, i.e., when, as part of other design steps, evidence in support of its associated claim is discovered. The argument and evidence may, therefore, make reference to other concerns, arguments and evidence in the design tree. The validity concern for a step, that which is subject to external validation by problem- and solution-owning stake-holders, will typically be required to ensure that the relationships between concerns and their discharge are adequate.

**CONCERN:** Step Validity
**STATUS:** Status

The status of the step validity concern is one of pending, signed-off or undischargeable.

**ARGUMENT & EVIDENCE:** Explanation of the status after validation, including the relationships by which evidence was gathered in the design, and the treatment chosen for the residual risk of the step.

**SIGNATORY:** Recognises the stake-holder or stake-holders that signed off the step.

Each element is optional, typically depending on the developmental stage and context.

[^4]: The premise set will be empty if the problem is axiomatic, as defined in Section 2.

## 3 POE instantiated for safety-critical software development

We have already applied POE in support of safety-critical software developments [MHR07b, HMR07]. In those papers our focus was on the evaluation for safety of proposed candidate solution structures (i.e., partial solutions; architectures) early in development.
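The justification template above lends itself to a simple data model. The sketch below is our own illustration (field and class names are not POE-prescribed): it captures concerns, their statuses and the rule, used later in the case study, that sign-off is inappropriate while any concern is undischargeable.

```python
from dataclasses import dataclass, field
from typing import Optional

PENDING, DISCHARGED, UNDISCHARGEABLE = "pending", "discharged", "undischargeable"

@dataclass
class Concern:
    """A concern raised by a justification obligation within a design step."""
    name: str
    claim: str                              # statement whose truth discharges it
    status: str = PENDING
    argument_and_evidence: Optional[str] = None
    risks: Optional[str] = None             # risk of continuing while undischarged

    def discharge(self, argument_and_evidence: str) -> None:
        """Record evidence for the claim; this may happen in a later step."""
        self.argument_and_evidence = argument_and_evidence
        self.status = DISCHARGED

@dataclass
class DesignStep:
    """A development step: a named justification plus its concerns."""
    step_id: str
    justification: str
    concerns: list = field(default_factory=list)

    def validatable(self) -> bool:
        """Sign-off is appropriate only when no concern is undischargeable;
        pending concerns may be postponed, at some developmental risk."""
        return all(c.status != UNDISCHARGEABLE for c in self.concerns)

validity = Concern("Step Validity", "the interpretations are valid")
step = DesignStep("STEP 1", "J_1", concerns=[validity])
validity.discharge("customer sign-off after bid to tender")
```

Postponement is then simply a concern left pending; the residual risk lives in its `risks` field until evidence arrives.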
In those papers, we attempted assurance-driven design for the first time and drew conclusions as to the sequencing of steps that it required. The result is shown in Fig. 1, presented as a UML activity diagram. The activities in the figure include the following:

- Context and Requirement Interpretation, to capture (increasing) knowledge and detail in the context and requirement of the problem (Activity 1; Context Interpretation was defined in Section 2.3);
- Solution Interpretation and Expansion, to structure the solution (or part thereof) according to a candidate architecture (Activity 2);
- Preliminary Safety Analysis (PSA), for early assessment of a candidate architecture (Activity 3).

Although of no further concern to us in this paper, the techniques chosen for application during the PSA depend on the level of criticality of the system under design and may include Functional Failure Analysis (FFA) [SAE96], functional Fault Tree Analysis (FTA) [VGRH81], or the use of fully formal specification languages and logical proof (for instance, [Jac01a], as used in [MHR07b]). The level of criticality is determined by whether the system is safety critical (highest integrity required) or safety related (high integrity, but not as high as safety critical).

### 3.1 The choice point

The choice point (labelled 4) in the figure depends on the outcome of the PSA, which determines whether the current candidate architecture is viable as the basis of a solution or whether, instead, we should backtrack the development to find another candidate solution or explore the problem further. In POE terms, choice point 4 needs to be made in the solution domain—it is a choice regarding the suitability of a solution architecture in a particular problem context—and so falls within the remit of a solution-owning stake-holder (a description of which will be given later). The artefact upon which the choice is based is the, perhaps incomplete, solution against which the PSA was run.
We have observed that it is not necessarily the case that a complete solution exists when the PSA is completed—one may, for instance, only have chosen a solution architecture that is hoped to form the basis of a solution. The nature of the choice is then something like "Is there good reason to believe that a solution can exist based on this architecture?" As such, it is clear that the decision made needs to be revisited later during development.

Figure 1. POSE Safety Process Pattern: to move towards the solution of a safety-critical problem, we first understand the problem better (Activity 1), use engineering judgement to determine a candidate solution architecture (Activity 2), then test the candidate for satisfaction of safety concerns, iterating if necessary.

### 3.2 Abstracting the POSE safety process pattern for general engineering use

Although useful in the safety-critical software arena, the POSE safety pattern does not consider the needs of validation in the problem space, nor the roles of those who will perform that validation. In the new process illustrated in Figure 2, three areas are distinguished, the various activities are renamed, and one new activity and one new choice are added. The roles are our names for those whose role places them at the centre (the problem solver) or on the periphery (the validating stake-holders) of problem solving, described in more detail below. The activities are:

- Partial Candidate Problem Exploration (renamed from Context and Requirement Interpretation in Figure 1);
- Partial Candidate Problem Validation, a newly added choice point (see below);
- (Partial) Candidate Solution Exploration; and
- Partial Candidate Solution Validation (again, see below).

The partial nature of the candidates is so that early problem solving can focus on parts of the problem or solution, rather than the whole problem straight away. The relationship between the activities is shown in Figure 2.
In the figure, there are roles of problem owning stake-holder(s), solution owning stake-holder(s), and problem solver, their respective scopes indicated by shading. A **problem owning stake-holder** is someone whose role is to validate a (partial) candidate problem description that results from Partial Candidate Problem Exploration. It is important to note that the roles, as such, do not overlap. There are many familiar examples of problem owning stake-holders. These include, but are not limited to, the customer (who pays for a product), the client (who pays for a service), the regulator (who requires safety, for instance), and the end-user (who will use the product or service when commissioned). It is the problem owning stake-holders' role to answer the question "Is this (partial) problem description valid for you?" Depending on the problem-owning stake-holders' responses, the problem solver may need to re-explore the problem (when the answer is "No!") or move on to try to find a (partial) solution (when the answer is "Yes!"). The role of the **solution owning stake-holder(s)** is to validate a candidate solution description, such as an architecture (a partial solution) or a choice of component (i.e., something of complete functionality). The roles of solution owning stake-holders may be less familiar to the reader. They include, but are not limited to, a development house's chief software architect—who knows which architectures their organisation uses in solutions, an oracle—who determines which of a number of features should be included in the next release, or a project manager—who needs to timebox particular activities; there are many other roles that fit the solution owning stake-holder.
It is the solution owning stake-holders' role to answer the question "Is this (partial) solution description valid?" Depending on their response, the problem solver may need to re-explore the solution (when the answer is "No!"), move back to exploring this or a previous problem (when the answer is "No, but it throws new light on the problem!"), or move on to the next problem stage (when the answer is "Yes!"). The role of **problem solver** is that of the person or persons who begins by trying to understand the problem and iterates towards a solution. As indicated by the upward pointing arrow in the upper right of Figure 2, iteration is not always local: it is, for instance, possible that through the failed validation of a solution a previous problem description may be revealed as flawed, and so invalid, even if it has been validated by a problem-owning stake-holder—problem-owning stake-holders make mistakes too! It is worth emphasising that we do not preclude communication between those that will perform the role of problem- or solution-owning stake-holder, or problem solver, during the process of problem solving. Indeed, this would be a very sensible option—even if just to manage the expectations of the various stake-holders before the formal validation is conducted. ![Figure 2.
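The routing of the process on stake-holder answers can be summarised as a small dispatch function. The activity and answer labels below are our own shorthand for the boxes and arrows of Figure 2, not POE terminology, and the sketch ignores the non-local iteration described above.

```python
def next_activity(current: str, answer: str) -> str:
    """Route the Figure 2 process on a validating stake-holder's answer."""
    if current == "problem_validation":
        # Problem-owning stake-holder: "Is this (partial) problem valid for you?"
        return ("solution_exploration" if answer == "yes"
                else "problem_exploration")        # "No!": re-explore the problem
    if current == "solution_validation":
        # Solution-owning stake-holder: "Is this (partial) solution valid?"
        if answer == "yes":
            return "next_problem_stage"
        if answer == "no_new_light":               # "No, but new light on the problem!"
            return "problem_exploration"
        return "solution_exploration"              # "No!": re-explore the solution
    raise ValueError(f"unknown activity: {current}")
```

For example, a solution rejected with new insight routes the problem solver back to problem exploration rather than straight to another candidate architecture.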
POE Process Pattern: to move towards a partial solution of a general engineering problem, we first understand the problem better (1), reflecting our understanding of the problem through validation with the problem-owning stake-holder (2); use engineering judgement to determine a candidate solution architecture (3), then test the candidate for satisfaction of the relevant concerns, iterating if necessary (4).](image)

### 3.3 Doing engineering design

Previously, we have focused on safety-critical development in POSE, whence the justification obligation must satisfy the interested stake-holders that their concerns (similar in nature to those considered in [Jac01b]) about safety are discharged. In this paper, we map the same case study to the POE pattern, using it as an opportunity to explain the various roles and artefacts. This will involve us in considering (and reconsidering) the various roles in detail. The justification obligations for the schemata underlying these exploration phases generate concerns that should be discharged as part of problem solving. A concern leads to a claim stated within a justification, the claim being that the concern is discharged by the development step. The justification will, eventually, contain arguments and evidence that the claim is valid, so that the concern is discharged. We say eventually because some concerns can only be discharged after the ramifications of a problem transformation are known, which is, typically, later in the development tree. One particularly important concern is the step validity concern—for which the associated claim is that a particular step is valid—as it is the point of contact of the POE process with stake-holders external to the creative process of the problem solver; in particular, the problem- and solution-owning stake-holders. The step validity concern associated with a problem exploration step is dischargeable only with reference to the problem-owning stake-holder.
The step validity concern associated with a solution exploration step is dischargeable only with reference to the solution-owning stake-holder. It is the discharge of step validity concerns that requires the problem solver to consult with stake-holders (although, of course, consultation with stake-holders may also take place during problem and/or solution exploration). On the other hand, like other concerns, the discharge of step validity concerns may be postponed. Depending on the criticality of a development, the risk exposed by such a postponement may be unacceptable—given that a problem- or solution-owning stake-holder has not validated a partial problem or solution candidate, the problem solver may be solving the wrong problem, with incorrect solution technologies, or both. In this case, the future development is based on an assumption of validity. The commitment of developmental resources on this assumption is the source of the risk, although it may be more or less mitigated by problem solver experience. Of course, even if the risk is managed by discharging the step validity concern, there may be secondary risks, such as a problem-owning stake-holder being incorrect in their validation. It may therefore be important, as part of the justification for the development step, to record the explicit instance of step validity concern discharge so that it is traceable; the recording of concern discharges is properly a part of all POE steps.

## 4 Case study

The case study is a real development, performed by the authors and Derek Manering of General Dynamics UK Ltd, based on systems flying in real aircraft. The case study is abbreviated only in the sense that some detail has been removed: it retains all essential complexity. More detail, and its original context, can be found in [MHR07b, HMR07, MHR07a, MHR07c].
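Recording who discharged a step validity concern, and when, can be as simple as the following sketch. The record layout is hypothetical (POE does not prescribe one); it merely illustrates the traceability fields the text argues for.

```python
import datetime

def sign_off_record(step_id: str, signatory: str, details: str) -> dict:
    """A minimal traceability record for the discharge of a step validity
    concern. Field names are our own illustration, not POE-prescribed."""
    return {
        "step": step_id,
        "concern": "Step Validity",
        "status": "signed-off",
        "signatory": signatory,          # who validated the step
        "details": details,              # where the evidence came from
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = sign_off_record("STEP 1", "Customer",
                         "Descriptions clarified after bid to tender")
```

Such a record, stored with the step's justification, makes the discharge auditable later in the development tree.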
It concerns the development of the Decoy Controller component of a defensive aids system, whose role is to control the release of decoy flares providing defence against incoming missile attack. In POE, to record that we have something that is deserving of the resources that will be used in solving a problem, we give a marker for the start of the problem solving process: all problem solving starts from the null problem—the problem of which we know nothing other than its existence:

\[ P_{null} : \ W : null,\ S : null \vdash R : null \]

null is used as the description for \( W \), \( R \) and \( S \) to indicate that nothing is known about them. Moving from the null problem to that of the case study is a first problem exploration step. The details of the problem exploration follow.

### 4.1 Initial Problem Exploration

During this problem exploration, the problem solver arrives at the following problem:

\[ P_1 : \ \text{Defence System}^{con},\ \text{Dispenser Unit}^{out}_{fire, sel},\ \text{Aircraft Status System}^{air},\ \text{Pilot}^{ok},\ \text{Decoy Controller} \vdash R^{con, out, air, ok}_{fire, sel} \]

The justification obligation for an interpretation schema application requires us to justify a newly provided description over the existing one. Here is the (collated) justification for all interpretation transformations from \( P_{null} \) to \( P_1 \), which add knowledge of the problem and its parts.
### STEP 1: Application of Context and Requirement Interpretation to problem \( P_{null} \)

**JUSTIFICATION \( J_1 \):** The identified requirement, domains and their relevant properties are summarised below:

<table> <thead> <tr> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Defence System</td> <td>The computer responsible for controlling and orchestrating all defensive aids on the aircraft</td> </tr> <tr> <td>Dispenser Unit</td> <td>Mechanical device for releasing decoy flares used as defence against incoming missile attack. It has a number of different flare types, and includes a safety pin that, when in place, prevents flares from being released</td> </tr> <tr> <td>Aircraft Status System</td> <td>The system which monitors the status of certain key aircraft parameters, including whether the aircraft is in the air</td> </tr> <tr> <td>Pilot</td> <td>The pilot, who can signal the controller that flare release should be allowed</td> </tr> <tr> <td>Decoy Controller</td> <td>The solution domain: the component to be designed</td> </tr> <tr> <td>\( R \)</td> <td>The conjunction of: \( R_1 \): On receiving a con command from Defence System, Decoy Controller shall obtain the selected flare type information from the relevant field in con, for use in its sel message to the Dispenser Unit to control flare selection. \( R_2 \): Decoy Controller shall issue a fire command only on receiving a con command from Defence System. This shall be the only way in which a flare can be released.
\( R_3 \): Decoy Controller shall cause a flare to be released by issuing a fire command to the Dispenser Unit, which will fire the selected flare.</td> </tr> </tbody> </table>

**PHENOMENA:** Phenomena and their control and sharing (see \( P_1 \)) are known from the existing system components as:

<table> <thead> <tr> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>fire</td> <td>Command to release the selected flare type</td> </tr> <tr> <td>sel</td> <td>Command to select flare type</td> </tr> <tr> <td>out</td> <td>Pin status: \( out = yes \) when pin removed</td> </tr> <tr> <td>con</td> <td>Command to select and release a flare type</td> </tr> <tr> <td>air</td> <td>Aircraft status: \( air = yes \) when aircraft airborne</td> </tr> <tr> <td>ok</td> <td>Pilot intention: \( ok = yes \) when release allowed</td> </tr> </tbody> </table>

**CONCERN:** Interpretation validity
**STATUS:** Discharged
**CLAIM:** The interpretations are valid
**ARGUMENT & EVIDENCE:** The choice of domains follows from the aircraft level safety analysis and the required choice of interlocks. The Defence System, Dispenser Unit and Aircraft Status System are existing components of the avionics system, with well-known properties (that could be validated through direct inspection). The Pilot is trained to follow protocol rigorously. The customer requirement was provided as an input to the developer team. Hazards \( H_1 \) and \( H_2 \) came from an aircraft level safety analysis which allocated safety requirements to the main aircraft systems, including the Decoy Controller. Hazards \( H_1 \) and \( H_2 \) have both systematic (safety related) and probabilistic components.
To counter these hazards, the following safety interlocks were required as input to the Decoy Controller to provide safety protection: an input from the pilot indicating whether release should be allowed; an input indicating whether the aircraft is in the air; and an input indicating whether the safety pin, present when the aircraft is on the ground, is in place. The expected behaviour is that flare release should be inhibited if any of the following conditions hold: a) the pilot disallows flares; b) the aircraft is not in the air; or c) the safety pin has not been removed. These interlocks provide extra assurance for hazard \( H_1 \), but not for \( H_2 \). Therefore, the safety task is to demonstrate that \( H_2 \) can be satisfied, with the knowledge that if \( H_2 \) can be satisfied, then so can \( H_1 \). Of course, the descriptions at which we have arrived through the problem exploration step have not been arrived at in a vacuum: as shown in the argument and evidence supporting a claim of step validity, they were arrived at only after careful work predicated on discussion with the customer and reference to best practice. The step validity concern should, then, be easy to discharge by appeal to the problem-owning stake-holder (in this case the customer for the system), and in a real development this should be done unless the risk of not doing it is acceptable. So that we can progress towards solution exploration, we will assume that the validity concern is discharged in this case, so that we may write:

**STEP 1: Sign-off of Context and Requirement Interpretation to problem \( P_{null} \)**

**CONCERN:** Step Validity
**STATUS:** Signed-off
**DETAILS:** The descriptions used were arrived at after a successful bid to tender, when the mechanical outline, approximate weight and power envelope of the system were established. Subsequent communications with the customer were used to clarify the requirements and properties of the system environment.
The remainder of the system was designed in response to the post-bid revised customer requirements, including their allocation to software and hardware as appropriate.

**SIGNATORY:** Customer

In general, recording who, where and when the validity concern was discharged would also be sensible, as would authentication—perhaps a signature—of the validator, for traceability reasons.

### 4.2 Solution Interpretation and Expansion

Given our validated problem statement, we may move towards exploration of the solution. An AStruct (short for Architectural Structure) is used to add structure to a solution domain, through an application of SOLUTION INTERPRETATION. An AStruct combines, in a given topology, a number of known solution components[^5] \( C_1, \ldots, C_m \) with solution components \( S_1, \ldots, S_n \) that remain to be found, written:

\[ \text{AStructName}[C_1, \ldots, C_m](S_1, \ldots, S_n) \]

with AStructName the AStruct name. Once the solution is interpreted by providing and justifying an AStruct, SOLUTION EXPANSION generates premise problems by moving the already known components \( C_i \) to the environment—expanding the problem context—whilst simultaneously re-focussing the problem to be that of finding the solution components \( S_j \) that remain to be designed. The requirement and context of the original problem are propagated to all sub-problems. A particular case, which is relevant to our case study, is when there is only one component to be found, that is, the AStruct has the following form:

\[ \text{AStructName}[C_1, \ldots, C_m](S) \]

In this case expansion generates only one premise problem, as follows:

\[ \frac{W, C_1, \ldots, C_m,\ S : null \vdash R}{W,\ \text{AStructName}[C_1, \ldots, C_m](S) \vdash R}\ \text{[SOLUTION EXPANSION]} \]

[^5]: There are also constraints on the phenomena sets, which we omit here for brevity; the reader is referred to [HRJ07] for the full definition.

In the case study, the following AStruct encodes the initial candidate architecture chosen for the Decoy Controller:

\[ \text{DecoyContAS}[\text{II}^{int}, \text{DM}^{ext,fire?}](\text{Safety Controller}^{fire,sel}) \]

which includes two extant components, II and DM, and one to-be-found component, Safety Controller. Therefore, a subsequent expansion leads to the problem:

\[ P_2 : \ \text{Defence System}^{con},\ \text{Dispenser Unit}^{out}_{fire, sel},\ \text{Aircraft Status System}^{air},\ \text{Pilot}^{ok},\ \text{II}^{int},\ \text{DM}^{ext,fire?},\ \text{Safety Controller} : null \vdash R^{con, out, air, ok}_{fire, sel} \]

Here is the combined development step:

**STEP 2: Application of Solution Interpretation and Expansion to problem \( P_1 \)**

**JUSTIFICATION \( J_2 \):** The identified architecture, its components and relevant properties are summarised in the table below:

<table> <thead> <tr> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Decoy Cont.</td> <td>DecoyContAS[\(\text{II}^{int}, \text{DM}^{ext,fire?}\)](\(\text{Safety Controller}^{fire,sel}\))</td> </tr> <tr> <td>II</td> <td>Collects together the interlock inputs</td> </tr> </tbody> </table>

**PHENOMENA:** The new phenomena introduced by the architecture are:

<table> <thead> <tr> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>fire?</td> <td>Command to release the selected flare type</td> </tr> <tr> <td>int</td> <td>Status of combined interlocks</td> </tr> </tbody> </table>

**CONCERN:** Sound engineering
**STATUS:** Discharged
**CLAIM:** The choice of candidate solution architecture exhibits sound safety engineering judgement
**ARGUMENT & EVIDENCE:** The architecture is chosen to minimise the number and extent of the safety related functions, localising them to simple, distinct blocks in accordance with best practice.
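As we read the SOLUTION EXPANSION schema, the single to-be-found-component case can be sketched as follows. This is a simplification for illustration only: it treats domains as plain strings and ignores phenomena decorations and the justification obligation.

```python
def solution_expansion(W, known_components, to_find, R):
    """SOLUTION EXPANSION for an AStruct with one to-be-found component:
    known components C_1..C_m move into the context, and the remaining
    component S (with null description) becomes the solution of the
    single premise problem. Requirement R is propagated unchanged."""
    premise_context = list(W) + list(known_components)
    premise = (premise_context, (to_find, "null"), R)
    return [premise]

# The case-study expansion: DecoyContAS[II, DM](Safety Controller)
premises = solution_expansion(
    ["Defence System", "Dispenser Unit", "Aircraft Status System", "Pilot"],
    ["II", "DM"], "Safety Controller", "R")
```

The single premise problem produced corresponds to \( P_2 \) above: II and DM join the context, leaving Safety Controller as the component still to be designed.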
**CONCERN:** Candidate solution validity
**STATUS:** Pending
**CLAIM:** The chosen solution architecture does not prevent the satisfaction of \( R \).

We note that, because the solution exploration is as yet incomplete, the pending validity concern will not need to be discharged until after the feasibility concern. Of course, discussions that might arise in addressing the validity concern may inform the PSA; however, there is no risk associated with the validity concern until the point at which the decision to commit further resources to the development is required; the first point at which this holds is after we know whether or not the current architecture is the basis of a technically feasible solution.

### 4.3 Preliminary Safety Analysis

The justification of the previous transformation step is incomplete: the feasibility concern remains to be discharged. The related claim is that the chosen architecture candidate should not prevent an adequately safe solution and yet, as we shall argue, it does prevent an adequately safe solution. In the worst case, to continue the design without checking feasibility incurs the risk that the final product cannot be argued safe. Traditionally, such risks are mitigated through over-engineering of the solution, but this typically adds to the development cost. Here, the risk is managed through a Preliminary Safety Analysis (PSA), eagerly applied in the attempt to discharge the feasibility concern. The goal of a PSA is to: (a) confirm the relevance of hazards allocated by the system level hazard analysis; (b) identify any further hazards to be added to the list; and (c) validate the architecture against the safety targets associated with the identified relevant hazards. Many techniques can be applied to perform a PSA. In [MHR07c] we used a combination of mathematical proof, Functional Failure Analysis (FFA) [SAE96] and functional Fault Tree Analysis (FTA) [VGRH81].
Note that PSA is not a POSE transformation per se (no POSE schema defines a PSA). Instead it is a technique which we use to discharge one of the concerns in the justification obligation for Solution Interpretation.

<table> <thead> <tr> <th>Step 2: Application of Solution Interpretation and Expansion to problem $P_1$ (cont'd)</th> </tr> </thead> <tbody> <tr> <td><strong>Concern:</strong> Candidate solution validity</td> </tr> <tr> <td><strong>Status:</strong> Undischargeable</td> </tr> <tr> <td><strong>Claim:</strong> The chosen solution architecture does not prevent the satisfaction of $R$. This claim does not hold.</td> </tr> </tbody> </table>

**Argument & Evidence:** We applied FFA to each architectural component in turn. The significant results from applying FFA to the DM are shown in Table 1, where three problem cases were identified: F2, F3 and F5, with 'Yes' in the Hazard column. A functional FTA applied to DM, using the three FFA problem cases F2, F3 and F5, indicates that a failure in uP (systematic or probabilistic) could result in fire? failing on. The Pilot's allow input provides some mitigation, but as soon as this is set (ok = yes) a flare will be released, which is undesirable behaviour. In other words, with this architecture, $H_2$ is only protected by the Pilot's allow input. If fire? failed on, then as soon as the Pilot indicated an intention to allow flare release, by selecting the switch, the flare would be released, which is not the design intention. Therefore the safety analysis indicates that fire? needs to have a safety involved (not safety critical) integrity. This can only be achieved with the existing design by upgrading all of the design to be safety involved. That is, by assigning fire? to the uP, we require that all uP functionality be of the safety integrity required for fire?, including much of the uP's functionality (timing, BIT, etc.) that is not safety-related.
Further, any updates to the uP software have to satisfy the safety involved integrity. To make the uP safety involved is not possible. The conclusion of the PSA is that the selected DM component, hence the architecture, is not a suitable basis for the design—no adequate solution can be derived from its parametrisation, hence the feasibility concern cannot be discharged.

<table> <thead> <tr> <th>Table 1. FFA Summary for Safety Controller</th> </tr> </thead> <tbody> <tr> <td><strong>Id</strong></td> </tr> <tr> <td>F1</td> </tr> <tr> <td>F2</td> </tr> <tr> <td>F3</td> </tr> <tr> <td>F4</td> </tr> <tr> <td>F5</td> </tr> </tbody> </table>

As there is a concern that is undischargeable, inclusion of the step validity concern is not appropriate.

### 4.4 Backtracking the development

The failed PSA causes an iteration of the POSE safety process, i.e., the development is backtracked to $P_1$ and a second candidate architecture chosen, informed by what we learned from the failed feasibility claim. The second iteration of the POSE process is similar to the first: although there is new information associated with the revised architecture, the remainder of the transformations may be carried across from the first iteration without change, simplifying this second (and any subsequent) iteration. The second candidate architecture differs from the original in that we replace DM with the higher integrity component DM'.
Here is the development step:

**STEP 2.1: Re-application of Solution Interpretation and Expansion to \( P_1 \)**

**Justification \( J_2' \):** The newly identified architecture, its components and relevant properties are summarised below (where they differ from \( J_2 \)):

<table> <thead> <tr> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Decoy Controller</td> <td>DecoyContAS[\(\text{II}^{int}, \text{DM}'^{ext,fire?}\)](\(\text{Safety Controller}^{fire,sel}\))</td> </tr> <tr> <td>DM'</td> <td>A microcontroller used to decode con messages from Defence System and, when appropriate, issue a fire command request, fire?, to the Safety Controller. In the schematic: the message buffer MB holds the received control message con; the micro-controller \( uP \) decodes it to extract the selected flare type (leading to sel); the FPGA (a Field-Programmable Gate Array, [HH05]) component decodes it to extract a fire command request (leading to fire?).</td> </tr> </tbody> </table>

**Concern:** Sound engineering
**Status:** Discharged
**Claim:** The choice of candidate solution architecture exhibits sound safety engineering judgement
**Argument & Evidence:** The chosen architecture is similar to the previous one (see \( J_2 \)) except that, as a result of the PSA, we require the fire? signal to be safety involved (but not safety critical) so as to allow the overall architecture to satisfy its safety target. We do this by taking the safety involved functions out of the \( uP \) component and routing them through a separate high integrity path. Details omitted for brevity.
**Concern:** Candidate solution validity Status: Discharged **Claim:** The chosen solution architecture does not prevent the satisfaction of \( R \). **Argument & Evidence:** Omitted, for brevity

As a result of this step, we arrive at:

\[
P_2' : \quad \text{Defence System}_{[con]},\ \text{Dispenser Unit}_{[fire,\ sel]},\ \text{Airplane Status System}_{[air]},\ \text{Pilot}_{[k]},\ \text{DM}'_{[fire,\ sel]}
\]

where the DM′ machine is decomposed internally as \( \text{DS}_{[con]} \rightarrow \text{MB} \), with MB feeding \( uP \) (yielding \( sel \)) and the FPGA (yielding \( fire? \)).

**Concern:** Step Validity **Status:** Signed-off **Argument & Evidence:** The current solution is not computationally complete, so no testing of a computationally complete product was possible. However, it is the assessment of the safety authority that the reasoning underlying the PSA that justifies this choice of architecture is valid and sound. **Signatory:** Safety Authority

Note that we do not yet have a working solution; rather, we have an architecture for a solution consisting of a microprocessor, an FPGA and a Message Buffer. The discharge of the validity concern does not remove all risk in proceeding with the development. For instance, the risk that the development will again need to be backtracked to find a third candidate solution remains. However, the risk that the solution-owning stake-holder will not sign off a solution based on this architecture has been shared with (or transferred to) that solution-owning stake-holder.

## 5 Discussion

The POE notion of problem requires a separation of context, requirement and solution, with explicit descriptions of what is given, what is required and what is designed. This improves the traceability of artefacts and their relations, as well as exposing all assumptions to scrutiny and validation.
That all descriptions are generated through problem transformation forces the inclusion of an explicit justification that such assumptions are realistic and reasonable. In particular, safety requirements are justified as valid, are fully traceable with respect to the designed system, and evidence of their satisfaction is provided by the adequacy argument of a completed POE development tree.

We have shown (a) how (partial) problem and solution validation are used to manage developmental risk and (b) how an assurance case can be constructed alongside the development of a product. That product and assurance argument development are co-designed is a fundamental possibility under POE: no transformation should occur without appropriate justification (although such justification may not be immediately available, requiring some exploratory development to be done first). On the other hand, development risks can be taken through tentative transformations which are not completely justified: in such cases, concerns can be stated as suspended justification obligations to be discharged later in the process. This adds the flexibility of trying out solutions, while still retaining the rigour of development and clearly identifying points where backtracking may occur.

Finally, POE defines a clear formal structure into which the various elements of evidence fit, that is, whether they are associated with the distinguished parts of a development problem or with the justifications of the transformations applied to solve it. This provides a fundamental clarification of the type of evidence provided and the reasoning applied. Moreover, because the form of justification is not prescribed under POE, all required forms of reasoning can be accommodated, from the deductive to the judgemental, within a single development.

Acknowledgments We acknowledge the financial support of IBM and of SE Validation Limited, in particular Colin Brain for his many comments and insights.
We also thank Derek Mannering whose work first instantiated the POE process pattern, and our colleagues in the Centre for Research in Computing at The Open University, particularly Michael Jackson. References
Parallel and Distributed Stream Processing: Systems Classification and Specific Issues

Roland Kotto-Kombi, Nicolas Lumineau, Philippe Lamarre, Yves Caniou

To cite this version: Roland Kotto-Kombi, Nicolas Lumineau, Philippe Lamarre, Yves Caniou. Parallel and Distributed Stream Processing: Systems Classification and Specific Issues. 2015. hal-01215287

HAL Id: hal-01215287 https://hal.archives-ouvertes.fr/hal-01215287 Preprint submitted on 13 Oct 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Abstract. Deploying an infrastructure to execute queries on distributed data stream sources requires identifying a scalable and robust solution able to provide results which can be qualified. Over the last decade, various Data Stream Management Systems have been designed that exploit new paradigms and technologies to improve performance in the face of the specific features of data streams and their growing number. However, trade-offs are often made between processing performance, resource consumption and quality of results. This survey offers an overview of existing solutions among distributed and parallel systems, classified according to criteria that allow readers to efficiently identify relevant existing Distributed Stream Management Systems according to their needs and resources.
Keywords: Distributed Stream Management Systems, Workflow, Map Reduce

1 Introduction

With the multiplication of data stream sources (sensor networks, connected devices...), stream analysis applications have evolved considerably in recent years. The treatment of data streams, that is, querying or analysing data, represents a challenge in terms of performance, scalability, robustness and quality of results. Unlike disk-based data, data streams are potentially infinite and some of their features, like item distribution and throughput, are unpredictable. To query data streams, the solutions [21, 18] based on centralized DBMS proved to be limited and irrelevant [22, 2]. Thus, Data Stream Management Systems (DSMS) were designed, like STREAM [4], Aurora [2] or TelegraphCQ [10], to compute queries on data streams, called continuous queries. These queries may be represented as dataflow diagrams where streams run between operators to deliver results to end-users with low latency. Concerning deployment, DSMS have moved from centralized but multi-core systems running on a single machine [2] to distributed infrastructures like Grids or Clouds [1, 20]. It is worth noting that all systems we consider in this paper are able to execute multiple queries simultaneously and exploit one or more types of parallelism described in the course of this article. With the appearance of the MapReduce framework [12], a different way of distributing operators was developed. MapReduce has the advantage of providing a highly parallel programming paradigm while remaining simple. Operators are simple to define and parallelism management is hidden from users. In that context, some DSMS based on the MapReduce framework have been developed, like Spark Streaming [24], C-MR [8] and M3 [3], and provide different visions of stream processing.

---
This work has been partially supported by the French National Agency for Research (ANR), project SOCIOPlug (ANR-13-INFR-0003).
This survey aims at exploring the various issues faced by existing Data Stream Management Systems, at bringing a more precise view of continuous query processing and at facilitating the comparison of the different known solutions. In this article, we present an original and up-to-date classification of parallel DSMS. This classification is not only based on the different paradigms on which DSMS have been defined these last years, but also considers systems' capacities to optimize data computations by reusing intermediate results and to be deployed in a distributed way. These inherent DSMS features are complemented by considering further aspects related to resource consumption, robustness and quality of results. The remainder of this paper is organized as follows: in Section 2, the background covers stream definitions and how streams can be processed with workflows. The MapReduce framework is also recalled. Next, Section 3 presents related work about existing surveys and their limits. Our classification of existing DSMS is proposed in Section 4. Section 5 proposes a transversal point of view on DSMS regarding resource consumption, robustness and quality of results, before concluding in Section 6.

2 Background

To understand the specific requirements of parallel and distributed stream processing, we remind readers of some basic notions like stream and window definitions. Next we briefly recall some features of operators and query languages, before considering the two main paradigms relevant to improving stream processing performance: workflow and MapReduce.

2.1 Stream

We recall the stream definition given in [5].

**Definition 1.** (Stream) Let us consider a schema $S$, composed of attributes which describe data, and an ordered timestamp set $\tau$. A stream is a potentially infinite multiset of elements $<s_i, \tau_i>$ where $s_i$ is a tuple of the stream respecting the schema $S$ and $\tau_i \in \tau$ the associated timestamp.
It is important to notice that the timestamp is not included in the schema, so many stream elements can share the same timestamp; however, the number of elements sharing a same timestamp must be finite. Data streams cannot be managed and processed like static data because of specific features like unpredictability. For more details, see former surveys [7, 22]. Actually, they are potentially infinite bags of data. Data streams cannot be stored on disk for further treatment because they may require an unbounded amount of memory. From a practical point of view, data streams cannot be gathered completely before processing them. Moreover, they are unpredictable with respect to input variations. Items may arrive by the millions per second, like click logs for website monitoring applications. Stream rate variations may happen at any time and require more resources during runtime. Finally, systems cannot anticipate arrivals of data, as data arrive continuously, not at regular intervals.

### 2.2 Window

Because data streams cannot be stored in memory, an alternative is to consider only recent data, called the *computation window* (or just window [7]), to get a result based on a data subset.

**Definition 2.** (Computation Window) A computation window is a logic stream discretization [24] which is defined by a size and a slide (see Figure 1a). Considering the front of a window and the chronological order, the size defines the timestamp interval of elements to consider. The slide defines the step between two consecutive window fronts.

According to Definition 2, a window is denoted a *tumbling* window, as found for instance in [13, 2], if size and slide values are the same. In other cases, it is denoted a *sliding* window, as we may find in [8, 24].

**Fig. 1: Computation window representation (a: sliding window; b: window division into multiple panes)**

For both types, the size and slide of a window can be based on two different units of measurement: time-based and count-based approaches.
Definition 3. (Time-based window) A time-based window is defined on a time interval $t_0$ to $t_n$. A stream element $e$ belongs to the window if $\tau_e \in [t_0, t_n]$, with $\tau_e$ the timestamp of $e$.

In Figure 1a, the first time interval, represented by iteration 1, gathers the first 20 elements (5+3+2+2+3+5) together, and iteration 2 gathers 30 elements (5+6+6+6+7) together. It is worth noting that, considering the current window slide and window size, five elements are common to both iterations.

Definition 4. (Count-based window) A count-based window is defined according to a number of elements $k$. This value $k$ corresponds to the window size. The $k$ most recent elements belong to the current iteration. The slide defines the number of new elements to receive before computing a new result on the updated window content.

As highlighted in [5], count-based windows are not deterministic when considering streams with multiple elements per timestamp. Indeed, consider a count-based window over the last one hundred stream elements. If more than one hundred elements arrive in the system at the same timestamp, the window definition is no longer deterministic because it is impossible to determine which elements to consider for computation.

According to the window definition (see Definition 2), we accept that consecutive iterations can share elements when the slide is smaller than the size. These shared subwindows, denoted panes, represent a logic stream discretization, as illustrated in Figure 1b. Considering that window size and slide are fixed for the whole computation, the size of a pane is given by the greatest common divisor of the size and the slide of a given windowing schema. As mentioned above, a pane can be shared by many consecutive iterations. This means that the result of a query on a pane can be useful for all iterations to which it belongs.
It is then a crucial feature to consider for incremental computation of sliding windows through the mutualization of pane results.

2.3 Stateless or stateful operators

Operators (filter, join, sort...) applied to data streams may process elements one by one or by logical blocks of elements. Thus, we distinguish two main categories of operators: stateless and stateful operators.

Stateless operators, for example filters based on an attribute value, process data streams element by element. They return a new result with an unpredictable frequency. For example, a filter will return nothing if its input does not satisfy the filtering predicate. Moreover, they do not have information about the current iteration of the computation window or former results when they compute a result from a data stream element. Nevertheless, a stateless operator may use historic data stored in local memory or on disk. This allows computing joins or some aggregate operators within a stateless operator.

In contrast, stateful operators take as input a logic block of stream elements to compute a single result for the entire set. The stream element block may be defined by a range of timestamps or a number of elements to consider. They keep information like the current iteration of the window or the list of distinct element values to compute a result from all considered elements. Information generated during a stateful operator's runtime is denoted its state. Consider, for example, a window-based stateful operator computing the sum of an element attribute for each computation window. Its state contains the identifier of the current iteration, the definition of the elements to consider for each input block and the current sum value. It is important to notice that the definition of the block does not necessarily match the window definition. The aim is to be able to build the window result directly from block results.
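As an illustration of the stateless/stateful distinction, here is a minimal Python sketch (the names and the count-based block size are illustrative, not the API of any cited system):

```python
# Stateless operator: processes each element independently, keeps no state.
def attribute_filter(element, threshold=10):
    # Emit the element if its 'value' attribute satisfies the predicate, else nothing.
    return element if element["value"] > threshold else None

# Stateful operator: accumulates a per-block sum and emits one result per block.
class WindowedSum:
    def __init__(self, size):
        self.size = size   # number of elements per logical block
        self.buffer = []   # state: values of the current iteration

    def process(self, element):
        self.buffer.append(element["value"])
        if len(self.buffer) == self.size:  # block complete: emit and reset
            result = sum(self.buffer)
            self.buffer = []
            return result
        return None

stream = [{"value": v} for v in [4, 12, 7, 20, 3, 9]]
filtered = [e for e in map(attribute_filter, stream) if e is not None]
op = WindowedSum(size=3)
sums = [r for r in (op.process(e) for e in stream) if r is not None]
```

The filter emits results at an unpredictable frequency (only when the predicate holds), whereas the stateful sum emits exactly one result per completed block.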
This major distinction between stateless and stateful operators needs to be considered in the definition of a continuous query language.

2.4 Continuous query languages

We introduce three levels of continuous query languages. These languages provide tools to process data streams through different interfaces and with different expressiveness.

**CQL and SQL-derived languages.** The first level of continuous query languages is based on SQL. Languages like CQL [5] are built over SQL. They add to the basic SQL implementation support for computation windows, described in more detail in Section 2.2. In addition, they provide operators returning a stream from data streams or static data. For example, CQL considers three classes of operators: Stream-to-Relation, Relation-to-Relation and Relation-to-Stream. Stream-to-Relation refers to the computation window definition as illustrated in Section 2.2. These operators take as input a stream and return an instantaneous relation $R$ according to the window definition. $R$ is composed of tuples belonging to a window and can be processed as a static relation. Relation-to-Relation operators correspond to basic SQL operators (projection, selection, join...). Finally, Relation-to-Stream operators allow building a stream from one or many instantaneous relations. CQL operators and optimization techniques are presented in more detail in [5].

**Box and arrow: graphical programming.** The box and arrow paradigm [2] represents an application as a directed acyclic graph (DAG) of boxes connected by arrows. A box corresponds to an operator, stateless or stateful, taking stream elements as inputs. Arrows indicate how stream elements run between boxes. Considering a set $\omega$ of predefined operators including Stream-to-Relation, Relation-to-Relation and Relation-to-Stream operators, a continuous query can be defined graphically as a box and arrow DAG where boxes are composed of operators from $\omega$.
The main difference with CQL is that the continuous query cannot benefit from automatic query optimization based on algebraic properties. The performance of a box and arrow DAG depends more on the user's implementation than that of a graph generated from an equivalent CQL query.

**Programming patterns.** The expressiveness of patterns allows defining stateless and stateful operators. Thus, patterns take as input a stream element or a list of stream elements and apply a user-defined operator written in a programming language like Java, C or Python. There are two main differences between programming patterns and other continuous query languages. First, a continuous query must be defined as the composition of atomic operators. This deeply increases development and maintenance effort because each operator is user-defined. Second, the optimization of the global execution requires rewriting operators one by one. No automatic optimization based on algebraic properties can be applied.

We have presented the notions and definitions necessary to handle data streams and define continuous queries. It is important to see then how these continuous queries are executed within a DSMS. For continuous query execution, two paradigms arise: workflow-designed queries and the MapReduce framework.

### 2.5 Workflow

To clarify the concept of workflow, we recall the following definition.

**Definition 5.** (Workflow) A workflow is a directed acyclic graph (DAG) where vertices are operators and edges define data transmission between operators.

Independently of their semantics, operators can process data one by one, denoted pipeline execution (see Definition 6), or by blocks of data, as for example aggregative operators. Operators communicate following two models: by invoking operator functions or via unbounded FIFO queues (pipeline) and buffers (block). In the context of stream processing, workflows present the advantage of being able to exploit data and operator parallelism.
Indeed, data streams can be processed in parallel without changing operator semantics. For example, while filtering important data stream volumes, data can be partitioned and distributed among multiple replicas of a single filter, each replica processing its partition independently. This is denoted the stream partitioning pattern [19].

**Definition 6.** (Stream pipelining) Let \( P \) be an operator which can be divided into \( k \) consecutive subtasks. Each \( P_i, i \in [1;k] \), is denoted the \( i \)-th stage of \( P \) and is executed on an exclusive process.

According to Definition 6, operators of a workflow can be seen as stages of a super operator. Data stream elements then run through all the stages sequentially. The limitation of stream pipelining is the presence of an aggregate operator requiring a set of data to compute a result.

Definition 7. (Stream partitioning) Let $P_1, P_2, \ldots, P_k$ be $k$ independent operators. They all take as input a subset of the outputs produced by an operator $P_0$. In order to process the $k$ independent operators in parallel, $P_0$ can split its outputs into $k$ partitions and distribute a partition to each $P_i$, with $1 \leq i \leq k$.

According to Definition 7, an operator of a workflow can split its outputs to distribute partitions among multiple operators. The partition method varies according to the semantics of the next operators. Consider, for example, an operator $A$ distributing its outputs to $k$ replicas of a same logic operator $B$. In that case, $A$ can split its outputs considering only load balancing between the replicas of $B$. But if $A$ is distributing its outputs to distinct and independent operators $B$ and $C$, the partition policy depends on the semantics of $B$ and $C$. Finally, a solution is that $A$ replicates all its outputs for each following operator according to the workflow.

2.6 MapReduce paradigm

MapReduce [12] is a well-known framework developed initially to process huge amounts of disk-based data on large clusters.
The strength of this framework is to offer great parallelism with a simple programming paradigm. Actually, the core of any MapReduce application relies on two functions: Map and Reduce. These generic functions are defined as follows according to [12]: - **Map** $(\text{key}_{in}, \text{val}_{in}) \rightarrow \text{list}(\text{key}_{inter}, \text{val}_{inter})$ - **Reduce** $(\text{key}_{inter}, \text{list}(\text{val}_{inter})) \rightarrow \text{list}(\text{val}_{out})$ As mentioned above, the MapReduce framework targets disk-based data processing. Contrary to DBMS, MapReduce-based systems do not rely on a data model to optimize treatments. In order to distribute great amounts of data on a large cluster, data are partitioned with regard to the cluster configuration (e.g., number of nodes executing Map and Reduce functions). Each partition is identified with a key used to assign the partition to a Map node. The scheduling between partitions and Map nodes follows distribution strategies like Round-Robin in order to balance computation load as evenly as possible. Each Map node executes the user-defined Map function on one or many partitions. The function produces a list of intermediate key/value-list pairs depending on partition contents. Map phase outputs are then shuffled and sorted in order to make the Reduce phase easier. An optional phase, called Combine, can be processed on each Map node. The Combine phase consists in applying the Reduce function on Map outputs in order to have results for each partition. It may be useful when there are potentially several redundant computations, as in [8]. Each Reduce node gathers intermediate key/value-list pairs and computes a list of values which are the final results. 3 Related works Previous surveys [7, 22] present some workflow-based DSMS, like Aurora [2], STREAM [4] and TelegraphCQ [10], which appeared in order to deal with these new issues.
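To make the Map and Reduce signatures of Section 2.6 concrete, here is a minimal word-count sketch (illustrative code: the shuffle/sort phase is simulated with an in-memory dictionary, and the function names are our own):

```python
from collections import defaultdict

def map_fn(key_in, val_in):
    # Map: (key, document) -> list of (word, 1) intermediate pairs.
    return [(word, 1) for word in val_in.split()]

def reduce_fn(key_inter, vals_inter):
    # Reduce: (word, [1, 1, ...]) -> list of output values (here, one count).
    return [sum(vals_inter)]

def map_reduce(inputs):
    # Shuffle/sort phase: group intermediate values by intermediate key.
    groups = defaultdict(list)
    for key_in, val_in in inputs:
        for k, v in map_fn(key_in, val_in):
            groups[k].append(v)
    # Reduce phase: one reduce call per intermediate key.
    return {k: reduce_fn(k, vs) for k, vs in sorted(groups.items())}

result = map_reduce([(1, "stream of data"), (2, "data stream")])
```

An optional Combine phase would apply `reduce_fn` to each Map node's local output before the shuffle, reducing the volume of intermediate pairs transferred.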
Main objectives are to provide: - Continuous query definition on data streams in a high-level language, including windowing schema support. - A query execution architecture that produces results as fast as possible and minimizes the number of computations thanks to result mutualization. - Structures to prevent input rates from overwhelming the query processor. After the works described in the survey [7], the MapReduce framework appeared as a robust solution that scales up easily for highly parallel batch processing. Some solutions based on MapReduce have emerged [8, 24, 3, 15]. Another survey [14] presents patterns related to the ability of a DSMS to dynamically adapt its treatments to its execution environment. Those patterns are grouped under the notion of elastic stream processing. In addition, a survey [19] exposes patterns and infrastructures dealing with failover management and treatment parallelization. It is relevant to present those issues in order to offer a global overview of stream processing challenges. In this context, we suggest, through this survey, an up-to-date overview of stream processing techniques and management systems covering and comparing workflow-based and MapReduce-based DSMS. 4 Classification of stream processing systems This section aims at facilitating the comparison of recent parallel and/or distributed DSMS according to some performance features. Our classification is based on three criteria of comparison. The first criterion concerns the paradigm of the solution, \textit{i.e.}, the topology of a continuous query within a DSMS. Facing our constraints of parallelism and distribution, two paradigms are found in the literature: the former, named workflow-based, consists in turning a continuous query into a workflow (see Section 2.5) and distributing operators among nodes, while the latter, named MapReduce-based, consists in exploiting the MapReduce paradigm to massively parallelize query processing.
Hybrid approaches will also be considered. The second criterion separates systems supporting \textbf{window incremental processing} from those that do not. As presented in Section 2.2, window iterations can share panes. Pane size can be determined directly from window specifications. A DSMS can take advantage of this knowledge to compute results on panes and store them for mutualization between consecutive iterations of a sliding window. This aims at reducing computations, relying only on an appropriate stream discretization. Moreover, it allows processing consecutive panes in parallel and merging results afterwards. In order to discretize streams according to panes, a DSMS must include a window-oriented strategy and identification management, at least for data scheduling or, in addition, for stream acquisition. Moreover, pane management within a DSMS can open the way to other window-based improvements. We consider that a DSMS unable to process windows incrementally includes window batch processing mechanisms. The last criterion concerns the support of parallel execution and allows distinguishing centralized multi-core solutions from distributed ones. Indeed, it appeared, as for batch processing, that the exploitation of a cluster of machines becomes necessary to scale up DSMS applications. Figure 2: Parallel Data Stream Management Systems classification Figure 2 depicts our classification of different parallel stream processing techniques according to the previous criteria. For instance, the Borealis system is classified as a workflow-based DSMS computing window iteration results incrementally, with the possibility to distribute query processing on a cluster. Moreover, an extended classification considering other types of queries (i.e., non-continuous queries) and other levels of data granularity is available here. The rest of this section provides details for each solution classified according to our criteria.
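The window incremental processing criterion can be illustrated with a small sketch (hypothetical Python; it uses the pane size gcd(size, slide) from Section 2.2): per-pane partial sums are computed once and shared between all window iterations that overlap them.

```python
from math import gcd

def pane_sums(values, pane):
    # Pre-aggregate per pane; each pane result can be shared by several iterations.
    return [sum(values[i:i + pane]) for i in range(0, len(values), pane)]

def sliding_window_sums(values, size, slide):
    pane = gcd(size, slide)    # panes evenly tile both the size and the slide
    panes = pane_sums(values, pane)
    per_win = size // pane     # panes per window iteration
    step = slide // pane       # panes to advance between iterations
    # Each iteration is the sum of its panes; overlapping panes are reused.
    return [sum(panes[i:i + per_win])
            for i in range(0, len(panes) - per_win + 1, step)]

# Window size 4, slide 2: consecutive iterations share one pane of size 2.
out = sliding_window_sums([1, 2, 3, 4, 5, 6, 7, 8], size=4, slide=2)
```

With a tumbling window (size equal to slide), each pane belongs to exactly one iteration and the scheme degenerates into plain batch processing.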
4.1 Workflow-based solutions Workflow-based solutions are chronologically the first to have been developed. We suggest, for the remainder of this section, a generic runtime architecture. It is composed of three layers. The acquisition layer is an interface between inputs (raw data streams) and the query engine that executes the workflow. This layer includes acquisition units able to operate basic treatments on input streams, like data fusion and data partitioning, or --- 6 [Link to classification page](http://liris.cnrs.fr/roland.kotto-kombi/PDSMS/classification/) complex treatments like *load shedding*. Load shedding aims at absorbing input stream variations so that the processing layer respects latency and quality constraints. The processing layer is composed of five components. First, a workflow compiler turns a query or a workflow into an executable *query plan*. Second, a data dispatcher routes data to the appropriate operators. Then, a scheduler applies a strategy to distribute operators over processing units. Scheduling strategies can be based on operator cost, selectivity or inter-unit traffic. Next, operators are allocated to processing units in order to be executed; they potentially belong to multiple query plans. These operators are executed on physical computation nodes of the infrastructure layer thanks to a resource manager. **Window batch processing** Firstly, we consider a centralized multi-core solution named *Aurora* [2]. Even if this DSMS includes window support thanks to an *Aggregate* operator [2], it does not compute results on panes but on complete windows. The lack of pane management prevents it from mutualizing computations within a workflow. Moreover, considering time-based windows, the size of a window may vary widely and require disk storage before processing. To tackle this issue, a timeout can be applied to the Aggregate operator. It avoids out-of-memory errors and guarantees a theoretical maximal end-to-end latency.
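The aggregate-with-timeout behaviour described above can be sketched as follows (an illustrative Python sketch of the general idea, not Aurora's actual implementation): the operator batches a time-based window but flushes early once a timeout expires, bounding both latency and buffered state.

```python
class TimedAggregate:
    """Batch a time-based window, but flush early after `timeout` time units
    to bound end-to-end latency and memory use (illustrative sketch)."""

    def __init__(self, window_size, timeout):
        self.window_size = window_size
        self.timeout = timeout    # effective only when smaller than window_size
        self.start = None         # opening timestamp of the current batch
        self.buffer = []          # buffered attribute values

    def process(self, value, ts):
        if self.start is None:
            self.start = ts
        flushed = None
        # Flush when the window closes or the timeout expires, whichever first;
        # the element arriving at flush time opens the next batch.
        if ts - self.start >= min(self.window_size, self.timeout):
            flushed = sum(self.buffer)
            self.buffer, self.start = [], ts
        self.buffer.append(value)
        return flushed

agg = TimedAggregate(window_size=10, timeout=4)
outs = [agg.process(v, t) for v, t in [(1, 0), (2, 1), (3, 2), (4, 5), (5, 6)]]
```

Here the timeout of 4 time units forces a flush at timestamp 5, well before the 10-unit window would close, so no batch ever spans more than the timeout interval.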
Aurora is a workflow-based solution running on a centralized multi-core architecture. The objective of Aurora is to offer a data stream-oriented solution that is not based on existing batch-oriented management systems. Referring back to Figure 3, the acquisition units are implemented by queues with priority policies. The scheduling strategy is based on Quality-of-Service (QoS) specifications defined by the user (e.g., latency). In Aurora, a continuous query corresponds to a workflow. This workflow is denoted an application and is composed of operator boxes. Boxes are linked by arrows to define how data streams run through the application. Aurora provides not only graphical tools to define applications, but also a syntax to write continuous queries in a SQL-like language based on a set of predefined primitives. These primitives implement stateless and stateful operators. To execute stateful operators, some additional buffers are used in the processing layer. Aurora targets applications like sensor network monitoring. In fact, Aurora is limited to small-scale applications because of memory and core capacity limits. Amongst distributed solutions that are tuple-oriented and based on workflows, we have identified some Aurora extensions [11, 9] and some Apache solutions [23]. As a direct extension of the Aurora project, Aurora* [11] aims at extending the Aurora architecture with a computation cluster composed of heterogeneous machines. All machines must be under the same administration domain. Indeed, a master has to be able to gather information about each machine and distribute application fragments between machines. Aurora* applications can be written exactly like Aurora applications. Operator distribution is completely managed by the master's scheduler, which limits opportunities to tune individual operators. The gain obtained through distribution allows Aurora* to target medium and large scale stream-based applications.
Targeted applications are network monitoring, location tracking services or fabrication line management. Aurora* provides the main tools to distribute Aurora applications on a cluster of machines. Nevertheless, many stream-based applications must be processed on clusters which are physically separated and grouped under administrative entities. These administrative entities encapsulate information on cluster nodes. The DSMS Medusa [9] thus aims at providing an infrastructure to support federated operation across administrative boundaries. In this setting, the master does not have information about each machine. The global cluster is composed of participants, themselves composed of a variable number of heterogeneous machines. Medusa distributes application fragments between participants. Each participant has, at initialization, the same amount of economic resources. They can use those resources to delegate operators to another participant in case of potential or effective overload. The global system follows microeconomic principles to reach stability. Medusa relies on the Aurora interface to define continuous queries. Like Aurora*, Medusa targets medium and large scale stream-based applications. Apache Storm (https://storm.apache.org/) aims at providing a framework for scalable and fault-tolerant stream processing. It relies on a cluster of heterogeneous machines. According to the Storm architecture, a machine, called a worker node, is composed of workers. Each worker contains a variable number of slots. A slot is an available computation unit which has dedicated CPU resources. When an operator is assigned to a slot, it is encapsulated in an executor. According to our generic architecture, processing units are equivalent to workers. Each operator corresponds to an executor. Contrary to Aurora's architecture, acquisition units are explicit operators of workflows in Storm. Continuous queries are represented as user-defined workflows, denoted topologies.
These topologies are also workflows, but vertices belong to two main categories: Spouts and Bolts. Spouts are data transmission nodes and can be conceptualized as multiplexers. They provide an interface between sources and the processing environment. After connecting to one or many sources, they transmit data streams to one or many Bolts. Each Bolt executes a user-defined operator which is considered as atomic. Storm does not include primitives but provides programming patterns (see Section 2.4). Basically, Storm does not support stateful operators, so it naturally does not support window incremental processing, but these features can be added through API extensions. Allocation of executors on workers is achieved by a scheduler minimizing CPU and memory usage. Storm targets applications handling huge volumes of data, like social data management. T-Storm [23] extends Storm with a traffic-aware scheduler to handle potential network bottlenecks. Topologies tend to be split into denser subgraphs before operator allocation on slots.

**Window incremental processing** Some centralized solutions [4, 10] have appeared that include window-based techniques, driven by the need to share computations across sliding window iterations. STREAM [4] aims at offering a data stream-oriented solution supporting the declarative and expressive SQL-like query language CQL. At the architecture level, STREAM differs from Aurora on two major aspects: i) the acquisition units exploit the Stream-to-Relation operators defined in CQL (see Section 2.4) and ii) the scheduler module is based on the chain scheduling algorithm minimizing memory usage at runtime [6]. Thanks to CQL, STREAM can automatically optimize SQL-like queries and turn them into query plans as explained in Section 2.5. Like Aurora, STREAM supports stateless and stateful operators. Instead of dedicating a core to each workflow, STREAM can identify operators common to multiple workflows and share their computations.
But, like Aurora, it targets small-scale applications. The applications targeted by STREAM deal with real-time monitoring, like financial analysis. TelegraphCQ [10] aims at offering a flexible infrastructure for fault-tolerant stream processing. TelegraphCQ departs completely from the generic architecture. Acquisition units are replaced by ingress and caching modules, like a sensor proxy, to create an interface with external data sources. The TelegraphCQ engine, called Eddy, replaces the processing units and considers a workflow only as an operator set with a routing policy. Operators are connected to the Eddy and all operator inputs and outputs pass through it. TelegraphCQ provides a hybrid interface for continuous query definition. It offers a set of predefined operators (project, join, filter, tumble...), but these primitives can only be combined into operators respecting programming patterns (see Section 2.4). A TelegraphCQ application is then a workflow of user-defined stateless and stateful operators, each operator being a composition of primitives. Like Aurora and STREAM, it can only handle small-scale applications, because the star-like model centred on the Eddy may generate significant inter-operator traffic that grows quickly with data stream volume. Targeted applications are event-based business processing. Amongst existing distributed workflow-based solutions, we identify Borealis [1], derived from Aurora*, as a distributed solution which considers windows for performance improvement. The main objective of Borealis is to improve result quality by softening the impact of real-world stream defects during processing. Indeed, some anomalies may be due not only to inefficient tuple transport inside an overloaded network but also to the emission of incorrect tuple values. Corrections can be done by dynamically revising query results and modifying query specifications (e.g., modifying the latency between two consecutive results).
Taking advantage of Aurora* and Medusa improvements, Borealis can be executed under one or multiple administrative domains. Like Aurora* and Medusa, Borealis is based on Aurora's graphical interface and syntax for continuous query definition. Borealis more specifically targets applications based on self-correcting stream sources, like financial service applications.

<table>
<thead>
<tr> <th></th> <th>Aurora</th> <th>Aurora*</th> <th>Medusa</th> <th>Borealis</th> <th>STREAM</th> <th>Storm</th> <th>T-Storm</th> <th>TelegraphCQ</th> </tr>
</thead>
<tbody>
<tr> <td><strong>Execution support</strong></td> <td>centralized multi-core</td> <td>distributed</td> <td>distributed</td> <td>distributed</td> <td>centralized multi-core</td> <td>distributed</td> <td>distributed</td> <td>centralized multi-core</td> </tr>
<tr> <td><strong>Continuous query definition</strong></td> <td>graphical</td> <td>graphical</td> <td>graphical</td> <td>graphical</td> <td>CQL or graphical</td> <td>API</td> <td>API</td> <td>API</td> </tr>
<tr> <td><strong>Workflow terminology</strong></td> <td>application</td> <td>application</td> <td>application</td> <td>application</td> <td>query plan</td> <td>topology</td> <td>topology</td> <td>query plan</td> </tr>
<tr> <td><strong>Vertices terminology</strong></td> <td>boxes</td> <td>boxes</td> <td>boxes</td> <td>boxes</td> <td>operators</td> <td>spouts or bolts</td> <td>spouts or bolts</td> <td>modules</td> </tr>
<tr> <td><strong>Primitive operators</strong></td> <td>yes</td> <td>yes</td> <td>yes</td> <td>yes</td> <td>yes</td> <td>no</td> <td>no</td> <td>yes</td> </tr>
<tr> <td><strong>Incremental window support</strong></td> <td>no</td> <td>no</td> <td>no</td> <td>yes</td> <td>yes</td> <td>no</td> <td>no</td> <td>yes</td> </tr>
<tr> <td><strong>Operator scheduling</strong></td> <td>QoS-based</td> <td>QoS-based</td> <td>contract-based</td> <td>QoS-based</td> <td>CPU and memory based</td> <td>CPU and memory based</td> <td>CPU, memory and network traffic based</td> <td>CPU and memory based</td> </tr>
<tr> <td><strong>Failover management</strong></td> <td>no</td> <td>yes</td> <td>yes</td> <td>yes</td> <td>no</td> <td>yes</td> <td>yes</td> <td>no</td> </tr>
<tr> <td><strong>Quality evaluation</strong></td> <td>yes</td> <td>yes</td> <td>yes</td> <td>yes</td> <td>yes</td> <td>no</td> <td>no</td> <td>no</td> </tr>
<tr> <td><strong>Quality definition scope</strong></td> <td>workflow node</td> <td>execution node</td> <td>execution node</td> <td>vertex</td> <td>vertex</td> <td>-</td> <td>-</td> <td>-</td> </tr>
<tr> <td><strong>Application example</strong></td> <td>sensor monitoring</td> <td>network monitoring</td> <td>network monitoring</td> <td>stock market analysis</td> <td>financial analysis</td> <td>stock market analysis</td> <td>stock market analysis</td> <td>event-based business processing</td> </tr>
</tbody>
</table>

Fig. 4: Workflow-based DSMS features

To sum up (see Figure 4), workflow-based DSMS can process sets of stateless and stateful operators on data streams [2, 4]. The definition of a workflow can be done in two ways. On the one hand, a workflow is derived from a global continuous query [4]; these DSMS benefit from algebraic optimization. On the other hand, a workflow is defined operator per operator, each operator being composed of predefined primitives [2, 10] or defined through an API [23]. They all take advantage of stream pipelining, but with different scopes: multi-core on a single machine [2, 4] or distributed on a cluster of heterogeneous machines [11, 9, 1, 23]. This scope impacts the scalability of a DSMS.

4.2 MapReduce-based solutions

MapReduce-based DSMS [8, 16, 24, 3] are all designed to support windowing schemas and cannot return results as soon as data arrives in the system. They must consider finite substreams which correspond to window definitions.
Nevertheless, it is rarely relevant to process an entire window in a single pass, because it could represent a huge volume of data and delay the processing of the next window. In this context, another window-oriented approach appeared to tackle the stream discretization issue. We suggest separating MapReduce-based DSMS into two categories: pipeline DSMS [8, 16], which execute the Map and Reduce phases asynchronously as soon as they receive new inputs, and buffer-based DSMS [24, 3], which collect data pane per pane and compute the Map and Reduce phases afterwards. Moreover, it is worth noting that stream elements are turned into key/value pairs to fit the MapReduce framework. Timestamps are neither keys nor values but only metadata to support windowing schemas. Most phases described in Section 2.6 are also implemented, with some variations. The main difference with batch-oriented systems like Hadoop is that the sources are data streams. Beyond the obvious acquisition challenge data streams represent, it is also important to notice that handling multiple sources is difficult. For some systems [8, 3], handling multiple sources is solved by routing elements according to their respective keys. Pipeline systems rely on an asynchronous execution of the Map and Reduce phases. A pipeline DSMS based on MapReduce, called Continuous-MapReduce (C-MR), suggests an implementation of asynchronous MapReduce. The aim is to take advantage of stream partitioning (see Definition 7) based on key values. In C-MR [8], data streams are infinite sequences of key/value pairs. As soon as a new element arrives in the system, it triggers a Map operation. A node can then execute a Map operation without considering which operation is running on other nodes. Elements are routed to Mappers according to their respective keys. A specificity of C-MR is that the Combine phase, described in Section 2.6, is mandatory. Thanks to the Combine phase, C-MR is able to generate a pane-oriented scheduling.
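The key-based routing of elements to Mappers described above can be sketched as follows (illustrative Python; `route` and `partition` are our names, not the C-MR API): a stable hash of the key decides which mapper queue receives each element, so all elements sharing a key land on the same node.

```python
# Sketch of key-based element routing in a pipeline MapReduce DSMS.
# zlib.crc32 gives a stable hash, so routing is deterministic across runs.
import zlib

def route(element, n_mappers):
    """Route a (key, value) pair to one of n_mappers by stable key hash."""
    key, _value = element
    return zlib.crc32(str(key).encode()) % n_mappers

def partition(stream, n_mappers):
    """Group a stream of key/value pairs into per-mapper input queues."""
    queues = [[] for _ in range(n_mappers)]
    for element in stream:
        queues[route(element, n_mappers)].append(element)
    return queues

stream = [("AAPL", 101), ("GOOG", 99), ("AAPL", 102)]
queues = partition(stream, 4)
# All elements sharing a key land in the same queue, so a mapper can run
# its Map operation without coordinating with other nodes.
```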
Indeed, Map phase outputs are sorted according to their timestamps and gathered on a node executing a Reduce function on the content of a pane. To materialize the end of a pane during the execution, punctuation mechanisms are used. Each source sends a specific tuple which marks that all tuples for a given pane have been sent. When a Combine node has received punctuation tuples from all sources, the execution starts. C-MR essentially exploits these Combine nodes to avoid redundant computations between consecutive sliding windows. Another solution based on Hadoop [15] aims at taking advantage of stream pipelining between mappers and reducers. Its main contribution is a scheduler based on hash functions optimizing jobs. Some buffer-based solutions have also been developed. The objective of these DSMS is to discretize data streams and process each batch like any disk-based data. Nevertheless, a complete MapReduce job is not triggered from scratch for each batch. Apache Spark Streaming [24] brings the Apache Spark engine to stream processing. Apache Spark is a parallel engine which executes DAGs obtained from a SQL-derived query or a MapReduce job. A graph is decomposed into MapReduce jobs by Apache Spark and its execution is optimized to fit in main memory. Spark Streaming then appears as an interface receiving tuples from data streams and discretizing them into multiple batches; it directly corresponds to our acquisition layer. Each batch is then processed by the Spark engine. According to Spark Streaming terminology, data streams are turned into Discretized Streams, or DStreams. A DStream is a potentially infinite sequence of Resilient Distributed Datasets (RDDs). An RDD is defined by a timestamp range, and this range is the same for all RDDs. An RDD can be considered as a pane (see Figure 5) explicitly defined by the user. Spark Streaming supports the definition of MapReduce jobs on sliding windows.
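The punctuation mechanism described above for materializing the end of a pane can be sketched as follows (illustrative Python; `combine_node` is a hypothetical name, not C-MR's API): a pane is reduced only once every source has punctuated it.

```python
# Sketch of pane punctuation: each source appends a ("PUNCT", pane_id)
# marker after sending all its tuples for a pane; the combine node fires
# the pane only once every source has punctuated it.

def combine_node(inputs, n_sources, reduce_fn=sum):
    """Consume an interleaved stream of (src, item) pairs and emit pane results."""
    pending, punctuated, results = {}, {}, []
    for src, item in inputs:
        if item[0] == "PUNCT":
            pane = item[1]
            punctuated.setdefault(pane, set()).add(src)
            if len(punctuated[pane]) == n_sources:      # pane is complete
                results.append((pane, reduce_fn(pending.pop(pane, []))))
        else:
            pane, value = item
            pending.setdefault(pane, []).append(value)
    return results

inputs = [(0, (0, 5)), (1, (0, 7)), (0, ("PUNCT", 0)), (1, ("PUNCT", 0))]
print(combine_node(inputs, n_sources=2))   # pane 0 closes after both punctuations
```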
As a window size is a whole number of RDDs, Spark Streaming (https://spark.apache.org/) can foresee which RDDs are involved in multiple window computations. They are then cached as long as a window benefits from the intermediate result. Spark Streaming belongs to the second generation of stream processing engines for large scale applications, like social network data management. The motivation of iMR [16] is that many log processing applications produce much less data than they consume. In this context, it is relevant to process data streams locally and then send the results for storage. The runtime architecture of iMR is similar to C-MR's architecture except for operator granularity. The aim remains to group pane results for potential reuse, but an important distinction is that iMR triggers a Map/Reduce operation for a list of elements. In addition, iMR suggests an uncombine operator to allow incremental operations. The physical implementation of iMR relies on a cluster of machines; the heterogeneity of machines is not handled by iMR's resource manager. M³ [3] is an implementation of MapReduce execution exclusively in main memory for stream processing. The objective is to offer a MapReduce DSMS resilient to input rate variations by dynamically revising discretization parameters. Instead of fixing the acquisition buffer size, M³ is based on a dynamic load balancing mechanism between Map nodes. In fact, stream discretization aims at processing approximately the same amount of data instead of triggering an execution at fixed timestamps. To summarize (see Figure 5), MapReduce-based DSMS integrate window-oriented schedulers. The objective is to obtain intermediate results on windows or subwindows for reuse. They take advantage of stream partitioning (see Definition 7) in order to parallelize computations.
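iMR's uncombine operator for incremental operations can be sketched for an invertible aggregate such as a sum (illustrative Python; `slide` is a hypothetical helper): when the window advances by one pane, the new pane is combined in and the expired pane is uncombined out, instead of re-reducing the whole window.

```python
# Sketch of incremental window maintenance with combine/uncombine.
# Works for invertible aggregates (sum, count, ...); not for min/max.

def slide(window_agg, new_pane_agg, old_pane_agg,
          combine=lambda a, b: a + b, uncombine=lambda a, b: a - b):
    """Advance a window aggregate by one pane without re-reducing it."""
    return uncombine(combine(window_agg, new_pane_agg), old_pane_agg)

panes = [4, 1, 7, 3, 9]                      # pre-aggregated pane sums
window = sum(panes[0:3])                     # initial window covers panes 0..2
window = slide(window, panes[3], panes[0])   # window now covers panes 1..3
print(window)                                # 11 == 1 + 7 + 3
```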
In comparison with workflow-based DSMS, the definition of continuous queries can be done operator per operator through MapReduce APIs [8, 16, 3], but also globally with a SQL-derived language [24].

4.3 Hybrid solutions

Hybrid DSMS rely on continuous queries represented by a workflow as defined in Aurora*. But contrary to workflow-based DSMS, data are turned into events [17]. An event is a key/value pair, like MapReduce inputs. The key represents the type of the data and the value, denoted the attribute, corresponds to the tuple of the stream.

**Window batch processing** S4 [17] is a hybrid DSMS without window-based optimization. It enriches the workflow representation: each operator is associated with two metadata, a type list of consumed events and a value list for each of them. This forms a *Processing Element* according to S4 terminology. Each Processing Element has a dedicated buffer which contains the data to process. Data are grouped by key to make operator execution easier, like Reduce operations. Operators are user-defined but must respect patterns to guarantee that their execution can be done in parallel. Even if S4 benefits from both paradigms, it lacks many features. Contrary to most workflow-based DSMS, S4 does not support a continuous query language. It compels developers to define ad hoc operators for each query. Moreover, queries must be designed as a set of Processing Element operators, making them difficult to translate to other DSMS. It does not support windowing schemas natively. Nevertheless, the integration of windowing schemas only requires setting time-based buffers for the relevant Processing Elements, e.g., an aggregate operator. Finally, S4 defines static routes between operators according to the global workflow.

**Window incremental processing** Based on a hybrid architecture similar to S4's but including window-based scheduling strategies, ESC [20] is a DSMS aiming at real-time stream analysis like pattern mining. ESC has been developed as a platform for Cloud computing.
The execution support must be a cluster of virtual machines homogeneous in terms of performance, including CPU speed, main memory capacity and bandwidth. Like S4, ESC represents a query as a DAG where each vertex is a Processing Element. ESC offers more flexibility than S4 because input Processing Elements are similar to Apache Storm Spouts [23]: they do not execute a specific task on data but only connect to multiple sources and send data to the other Processing Elements which execute the operators specified by the workflow. Data are represented as events that differ from S4's. ESC events are sets of four elements: a type, a context, a key and an associated value. The key/value pair is exploited as it is in the S4 architecture. The type is the semantic representation of an event. For example, while processing stock market data, the type can be "stock value" or "stock evolution" and the key, the acronym representing a given stock. The context corresponds to the query interested in the event. Compared to S4, ESC suffers fewer drawbacks because window support is effective through time-based buffers. In fact, tumbling windows are supported thanks to a tick mechanism. ESC includes a clock which emits a tick at regular intervals. When a tuple arrives in the system, it is assigned to the closest tick, which is used as its timestamp. In addition, all operators are synchronized and must process the data belonging to their respective buffers and flush them afterwards. Nevertheless, ESC only provides function patterns to define operators. Apache Flink is a DSMS which relies on a distributed streaming dataflow engine. Like Storm, Flink allows the definition of a continuous query as a workflow of operators sending their outputs to one or many other operators. Nevertheless, Flink supports a SQL-like language enriched with the operators Map and Reduce. In this way, Flink is clearly a hybrid DSMS exploiting both stream pipelining and stream partitioning.
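ESC's tick mechanism for tumbling windows can be sketched as follows (illustrative Python; the names are ours): tuples are stamped with the closest clock tick, and all tuples sharing a tick form one buffer that is flushed together.

```python
# Sketch of a tick-based tumbling window: a clock emits a tick every `tick`
# time units; an arriving tuple is stamped with the closest tick.

def closest_tick(arrival_time, tick):
    """Timestamp a tuple with the nearest clock tick."""
    return round(arrival_time / tick) * tick

def bucketize(arrivals, tick):
    """Group (arrival_time, value) tuples into per-tick buffers."""
    buffers = {}
    for t, v in arrivals:
        buffers.setdefault(closest_tick(t, tick), []).append(v)
    return buffers

arrivals = [(0.9, "a"), (1.2, "b"), (1.8, "c")]
print(bucketize(arrivals, tick=1))   # {1: ['a', 'b'], 2: ['c']}
```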
Finally, Flink implements a specific memory management to tackle garbage collection issues. We have presented several DSMS according to our classification, and it emerges that stream processing on tumbling or sliding windows suffers many constraints. Workflow-based solutions [2, 10, 9] aim at providing results as soon as possible for monitoring and real-time analysis applications. They are mostly expressive [4] through the definition of a continuous query language [5] which includes stateful operators and explicit windowing schema support. Compared to workflow-based DSMS, MapReduce-based DSMS [8, 24, 16, 3] are designed to store and reuse intermediate results. Our classification relies on logical criteria (the integration of window-based optimization), conceptual criteria (the paradigm) and physical criteria (the execution support). Nevertheless, aspects dealing with the adaptation of DSMS to their execution environment must also be considered for a complete overview.

5 Complementary issues for stream processing

This section aims at covering most aspects of resilient DSMS design. First, we introduce some dynamic optimization techniques for elastic stream processing used in the DSMS presented in Section 4. Second, failover issues are presented, along with some existing solutions and their limits. Then we introduce Quality-of-Service evaluation and the different variants existing in some DSMS.

5.1 Elastic stream processing

As introduced in Section 2.1, data streams are unpredictable. Input rates may vary greatly in terms of volume and distribution of values at any moment during runtime. This may lead to bottlenecks at the acquisition and processing layers. It is thus an important aspect to consider in the analysis of a DSMS in order to estimate its availability during execution. Dynamic reconfiguration patterns, gathered under the notion of elastic stream processing [14], are exploited by DSMS to tackle this issue.
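As a concrete example of one such pattern, the load shedding introduced in Section 4.1 can be sketched as follows (illustrative Python; a uniform shedder, which is only one of several possible policies): when a batch exceeds the processing capacity, a fraction of its tuples is dropped so that latency constraints can still be met.

```python
# Minimal sketch of a load-shedding acquisition unit (hypothetical design):
# keep at most `capacity` tuples from each batch, dropping uniformly.

def shed_load(batch, capacity):
    """Keep at most `capacity` tuples from a batch, dropped uniformly."""
    if len(batch) <= capacity:
        return batch
    # Keep every k-th tuple so the surviving sample stays spread over the batch.
    step = len(batch) / capacity
    return [batch[int(i * step)] for i in range(capacity)]

batch = list(range(10))
print(shed_load(batch, 4))   # 4 tuples survive, spread across the batch
```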
Spark Streaming [24] does not integrate auto-reconfiguration mechanisms because of its architecture. Actually, Spark Streaming sets its acquisition units to process RDD per RDD (see Figure 5) and bases the mutualization of computation on this strategy. Nevertheless, tuning opportunities are provided, like the configuration of the level of task parallelism. Spark Streaming also allows defining each RDD size by its memory size in order to avoid triggering an execution for a small amount of data. S4's [17] decentralized architecture prevents global reconfiguration of the workflow. Processing units are managed locally and workflow edges are static. However, load balancing is managed locally by each processing node. As a reminder, S4 operators, denoted Processing Elements, are grouped by processing nodes, and a processing node belongs to a single physical machine. Instead of managing load balancing, iMR [16] is able to apply adaptive load shedding policies on each processing unit to absorb input rate increases. The architecture of M³ allows dynamically adapting acquisition units, more precisely buffer sizes, to gather multiple panes in a buffer or acquire a pane over multiple buffers. C-MR [8] includes more sophisticated elastic mechanisms: scheduling strategies can be enabled to decrease global latency as much as possible (Oldest-Data-First strategy) or to decrease memory usage to avoid out-of-memory errors (Memory-Conservative strategy). Storm bases operator sliding mostly on the resources required by an executor (see Section 4); the objective is to balance the load among worker nodes. As described in Section 4, T-Storm modifies Storm's scheduler by considering in priority the inter-worker traffic generated by an execution layout. But those strategies do not take advantage of operator reordering [14]. Actually, some operators can be commuted without changing the final outputs, e.g., filters. Operator reordering is exploited by ESC [20].
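The filter commutation just mentioned can be sketched as a selectivity-based reordering (illustrative Python; `reorder_filters` is a hypothetical name): since filters commute, evaluating the most selective one first minimizes the tuples reaching later, possibly more expensive, predicates without changing the final output.

```python
# Sketch of selectivity-based filter reordering: order predicates by the
# fraction of tuples they keep, measured on a sample of the stream.

def reorder_filters(filters, sample):
    """Order predicates by measured selectivity (fraction of tuples kept)."""
    selectivity = {f: sum(1 for t in sample if f(t)) / len(sample)
                   for f in filters}
    return sorted(filters, key=lambda f: selectivity[f])

is_even = lambda x: x % 2 == 0          # keeps 50% of the sample
is_large = lambda x: x > 90             # keeps 9% of the sample
ordered = reorder_filters([is_even, is_large], sample=list(range(100)))
print(ordered[0] is is_large)           # True: most selective filter runs first
```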
The master process analyses, during runtime, the costs of operators, and modifies the execution order if necessary. In the same way, STREAM [4] takes advantage of its SQL support to operate a dynamic algebraic optimization. The STREAM optimizer is based on operator selectivity and average end-to-end latencies. Aurora [2] offers many elastic mechanisms even though it cannot take advantage of a cluster of machines. Operator boxes (see Section 4) can be combined to be executed in a single step. For example, a projection of some attributes followed by a filter can be processed tuple per tuple on a single node. Moreover, projections can be automatically added to an application to decrease tuple size without losing relevant attributes. Aurora also integrates operator reordering, like ESC, and load shedding. Finally, Aurora is able to dynamically change some operator implementations, like joins. Algorithm selection [14] requires monitoring each box's end-to-end latency in order to evaluate whether another implementation could process the data more efficiently. Aurora's extensions (Aurora* [11], Medusa [9], Borealis [1]) include all of Aurora's elastic mechanisms and implement two mechanisms for load balancing among nodes: operator sliding and operator splitting [11]. An operator slides from one execution node to another to avoid local overload. Operator splitting aims at taking advantage of data partitioning to parallelize the execution of an operator. Finally, TelegraphCQ [10] integrates operator reordering thanks to its centralized processing engine Eddy [10].

5.2 Failover management

Failover management aims at dynamically balancing execution environment variations which decrease the safety of the system. Contrary to elastic stream processing, safe failover induces resource consumption at all times during the execution to prevent quality degradation in case of a failure. The failures considered are complete node failures (neither CPU nor memory are available), implying data loss.
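Such complete node failures are commonly detected with a heartbeat mechanism, as used by Storm and T-Storm later in this section. A minimal sketch (illustrative Python, not Storm's actual implementation): the master records each worker's last heartbeat time and flags any worker silent for longer than a timeout.

```python
# Sketch of heartbeat-based failure detection: a worker is considered
# failed if its last heartbeat is older than `timeout`.

def failed_workers(last_heartbeat, now, timeout):
    """Return workers whose last heartbeat is older than `timeout`."""
    return sorted(w for w, t in last_heartbeat.items() if now - t > timeout)

heartbeats = {"worker-1": 100.0, "worker-2": 96.5, "worker-3": 99.8}
print(failed_workers(heartbeats, now=101.0, timeout=3.0))   # ['worker-2']
```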
The aim is then to prevent those losses through three failover patterns [19] for safety improvement: simple standby, checkpointing and hot standby. DSMS supporting only a centralized multi-core architecture (Aurora, TelegraphCQ, C-MR) do not integrate failover management. Some DSMS make a compromise between end-to-end latency and failover management. Indeed, S4 and iMR accept lossy processing: while iMR provides explicit feedback on data losses, S4 only restarts a new node with the lost operators on current data. M³ and Spark Streaming rely on a cluster of Zookeeper (https://zookeeper.apache.org/) nodes to operate checkpointing [19] and restart failed operators on one or many active nodes. The distinction between Spark Streaming and M³ is that M³ stores operator states exclusively in main memory. Storm and T-Storm use a heartbeat mechanism to detect node failures. The master node continuously receives heartbeats from worker nodes and tries to restart them if a heartbeat is not received within a predefined timeout. Operator states are stored on the shared memory of a Zookeeper cluster or on disk. Aurora's extensions (Aurora*, Medusa and Borealis) guarantee $k$-safety: the failure of any $k$ nodes does not impact the final result of an application. In order to offer that guarantee, these DSMS discard tuples lazily. A tuple is lazily discarded if it is deleted only after it no longer serves as input to any operator of a workflow.

5.3 Quality evaluation

Stream processing implies processing continuous queries on potentially infinite data. As exposed above, the execution environment may induce irreversible data loss. Some DSMS [2, 1, 16] aim at providing feedback to end-users on data losses. This feedback is conceptualized as a quality score. The quality is used as a threshold to make the difference between satisfying and unsatisfying results. Aurora [2] integrates a quality score in a Quality-of-Service (QoS) specification.
QoS is defined as a function depending on performance or result accuracy expectations. Users do not define a threshold value for quality but rather define which parameter to favour. An Aurora application then tends to maximize the quality scores of final results. In order to control quality, Aurora's resource management is based on QoS-aware algorithms. Nevertheless, Aurora is designed for centralized multi-core execution and its QoS-aware algorithms are not adapted to distributed architectures. Indeed, the QoS definitions used in Aurora do not consider that data can be lost because of network failures. In order to deal with this issue, Aurora* is able to infer intermediate QoS constraints for any node of an Aurora* cluster. Intermediate QoS constraints are inferred only for internal nodes, i.e., nodes connected neither to sources nor to final outputs. Borealis [1] extends QoS specification to a more fine-grained level. Actually, Aurora* is able to infer a QoS specification for each node output, each node executing a subgraph of the global Aurora application, but there is no inner-node QoS control on the user's side. Borealis [1] thus allows defining QoS specifications for any vertex in the dataflow (see Figure 4). Another quality function, denoted $C^2$ [16], aims at providing information about completeness to end-users. $C^2$ is defined as a spatio-temporal quality score: the spatial aspect represents the resource consumption involved by the computation of the current result, while the temporal component is closer to Aurora's QoS and provides information about data loss during the last window computation.

6 Conclusion

In this paper, we have proposed a classification of Data Stream Management Systems according to their paradigm, their capacity to handle windowing and the type of infrastructure on which they can be deployed.
To offer readers an additional point of view on these DSMS, we considered aspects related to elastic stream processing, failover management and the evaluation of result quality. It appears that the targeted applications are the main criterion for deciding which DSMS is likely to deliver the best performance. For an application composed of complex operators handling potentially large volumes of data, updating results as soon as possible and tolerating non-optimal accuracy, Borealis appears as the best choice. It includes high-level operators and automatic mechanisms for workflow optimization. Moreover, it supports window incremental processing and QoS specifications. Storm delivers great performance in a similar context, but the development and maintenance efforts are important. Conversely, an application periodically computing results that can be totally or partially reused, on potentially huge volumes of data, will be handled efficiently by Spark Streaming. The use of Discretized Streams [24] allows the mutualization of intermediate results, and tuning opportunities soften elastic stream processing issues. This survey underlines that, in the era of Big Data and Green IT, no solution is completely adapted and fully satisfying. Indeed, existing systems suffer from not having specific optimizations at each step of query processing.

References
Data Mining MTAT.03.183 Online Analytical Processing and Data Warehouses Jaak Vilo 2011 Fall Acknowledgment • This slide deck is a "mashup" of the following publicly available slide decks: - http://www.postech.ac.kr/~swhwang/grass/DataCube.ppt - http://www.cs.uiuc.edu/homes/hanj/bk2/03.ppt - Hector Garcia-Molina, Stanford University - Marlon Dumas, Univ. of Tartu - Sulev Reisberg, Quretec & STACC - … Outline • The "data cube" abstraction • Multidimensional data models • Data warehouses Sales data example <table> <thead> <tr> <th>ID</th> <th>Region</th> <th>Store</th> <th>Category</th> <th>Product</th> <th>Date</th> <th>Sale</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Tallinn</td> <td>Yemiesde</td> <td>TV</td> <td>Samsung</td> <td>13.10.2011</td> <td>1000</td> </tr> <tr> <td>2</td> <td>Tallinn</td> <td>Mustika</td> <td>Radio</td> <td>Sony</td> <td>10.9.2011</td> <td>1200</td> </tr> <tr> <td>3</td> <td>Tallinn</td> <td>Lounakekus</td> <td>TV</td> <td>Samsung</td> <td>11.11.2011</td> <td>1150</td> </tr> <tr> <td>4</td> <td>Tallinn</td> <td>Lounakekus</td> <td>TV</td> <td>Philips</td> <td>11.11.2011</td> <td>1500</td> </tr> <tr> <td>5</td> <td>Tallinn</td> <td>Mustika</td> <td>Radio</td> <td>Samsung</td> <td>12.9.2010</td> <td>800</td> </tr> <tr> <td>6</td> <td>Tallinn</td> <td>Lounakekus</td> <td>TV</td> <td>Sony</td> <td>12.9.2011</td> <td>1200</td> </tr> <tr> <td>7</td> <td>Tallinn</td> <td>Mustika</td> <td>Radio</td> <td>Philips</td> <td>11.11.2011</td> <td>350</td> </tr> <tr> <td>8</td> <td>Tallinn</td> <td>Lounakekus</td> <td>TV</td> <td>Sony</td> <td>11.11.2011</td> <td>1150</td> </tr> </tbody> </table> Excel pivot table Example: Sales <table> <thead> <tr> <th>Market</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>Atlanta</td> <td>$8,000</td> </tr> <tr> <td>Chicago</td> <td>$8,000</td> </tr> <tr> <td>Denver</td> <td>$8,000</td> </tr> <tr> <td>Detroit</td> <td>$8,000</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Product</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>Apples</td> <td>$8,000</td> </tr> <tr> <td>Cherries</td> <td>$8,000</td> </tr> <tr> <td>Grapes</td> <td>$8,000</td> </tr> <tr> <td>Melons</td> <td>$8,000</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Quarter</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>Qtr 1</td> <td>$8,000</td> </tr> <tr> <td>Qtr 2</td> <td>$8,000</td> </tr> <tr> <td>Qtr 3</td> <td>$8,000</td> </tr> <tr> <td>Qtr 4</td> <td>$8,000</td> </tr> <tr> <td>Total</td> <td>$32,000</td> </tr> </tbody> </table> Table 2.1: Three different views of fruit sales: time, market, and product Jaak Vilo and other authors UT: Data Mining 2009 Multidimensional View of Sales - Multidimensional analysis involves viewing data simultaneously categorized along potentially many dimensions <table> <thead> <tr> <th>Quarter</th> <th>Sales</th> </tr> </thead> <tbody> <tr> <td>Qtr 1</td> <td>400.00</td> </tr> <tr> <td>Qtr 2</td> <td>500.00</td> </tr> <tr> <td>Total</td> <td>900.00</td> </tr> </tbody> </table> Typical Data Analysis Process - **Formulate** a query to extract relevant information - **Extract** aggregated data from the database - **Visualize** the result to look for patterns. - **Analyze** the result and formulate new queries. 
- **Online Analytical Processing (OLAP)** is about supporting such processes - OLAP characteristics: No updates, lots of aggregation, need to visualize and to interact - Let’s first talk about aggregation… Relational Aggregation Operators - SQL has several aggregate operators: - `SUM()`, `MIN()`, `MAX()`, `COUNT()`, `AVG()` - The basic idea is: - Combine all values in a column into a single scalar value - Syntax ```sql SELECT AVG(Temp) FROM Weather; ``` The Relational **GROUP BY** Operator - **GROUP BY** allows aggregates over table sub-groups ```sql SELECT Time, Altitude, AVG(Temp) FROM Weather GROUP BY Time, Altitude; ``` Limitations of the GROUP BY - Group-by is one-dimensional: one group per combination of the selected attribute values <table> <thead> <tr> <th>Model</th> <th>Year</th> <th>Color</th> <th>Sales</th> </tr> </thead> <tbody> <tr> <td>Chevy</td> <td>1994</td> <td>Black</td> <td>50</td> </tr> <tr> <td>Chevy</td> <td>1995</td> <td>Black</td> <td>85</td> </tr> <tr> <td>Chevy</td> <td>1994</td> <td>White</td> <td>40</td> </tr> <tr> <td>Chevy</td> <td>1995</td> <td>White</td> <td>115</td> </tr> </tbody> </table> 1. Calculate total sales per year 2. Compute total sales per year and per color 3. 
Calculate sales per year, per color and per model Grouping with Sub-Totals (Pivot table) - Sales by Model by Year by Color <table> <thead> <tr> <th>Model</th> <th>Year</th> <th>Color</th> <th>Sales</th> </tr> </thead> <tbody> <tr> <td>Chevy</td> <td>1994</td> <td>Black</td> <td>50</td> </tr> <tr> <td></td> <td></td> <td>White</td> <td>40</td> </tr> <tr> <td></td> <td>1995</td> <td>Black</td> <td>85</td> </tr> <tr> <td></td> <td></td> <td>White</td> <td>115</td> </tr> </tbody> </table> - Note that sub-totals by color are missing; if they are added it becomes a cross-tabulation Grouping with sub-totals (cross-tab) <table> <thead> <tr> <th>Model</th> <th>Year</th> <th>Color</th> <th>Units</th> </tr> </thead> <tbody> <tr> <td>Chevy</td> <td>1994</td> <td>Black</td> <td>50</td> </tr> <tr> <td></td> <td></td> <td>White</td> <td>40</td> </tr> <tr> <td></td> <td>1995</td> <td>Black</td> <td>85</td> </tr> <tr> <td></td> <td></td> <td>White</td> <td>115</td> </tr> <tr> <td>Total</td> <td></td> <td>Black</td> <td>135</td> </tr> <tr> <td></td> <td></td> <td>White</td> <td>155</td> </tr> </tbody> </table> Grouping with Sub-Totals (Relational version) <table> <thead> <tr> <th>Sales by Model by Year by Color</th> <th>Sales by Model by Year</th> <th>Sales by Model</th> </tr> </thead> <tbody> <tr> <td>Chevy 1994 Black 50</td> <td>Chevy 1994 90</td> <td>Chevy 290</td> </tr> <tr> <td>Chevy 1994 White 40</td> <td>Chevy 1995 200</td> <td></td> </tr> <tr> <td>Chevy 1995 Black 85</td> <td></td> <td></td> </tr> <tr> <td>Chevy 1995 White 115</td> <td></td> <td></td> </tr> </tbody> </table> Sub-totals by color are still missing... 
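To make the running example concrete, here is a minimal sketch of the per-year and per-year-per-color aggregations that plain GROUP BY can express directly, using Python's sqlite3 on a hypothetical in-memory copy of the Sales table above:

```python
import sqlite3

# Hypothetical in-memory table mirroring the Chevy sales example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales(Model TEXT, Year INT, Color TEXT, Sales INT)")
con.executemany("INSERT INTO Sales VALUES (?,?,?,?)", [
    ("Chevy", 1994, "Black", 50),
    ("Chevy", 1995, "Black", 85),
    ("Chevy", 1994, "White", 40),
    ("Chevy", 1995, "White", 115),
])

# 1. Total sales per year -- one group per Year value.
per_year = con.execute(
    "SELECT Year, SUM(Sales) FROM Sales GROUP BY Year ORDER BY Year"
).fetchall()
print(per_year)  # [(1994, 90), (1995, 200)]

# 2. Total sales per year and per color -- one group per (Year, Color) pair.
per_year_color = con.execute(
    "SELECT Year, Color, SUM(Sales) FROM Sales "
    "GROUP BY Year, Color ORDER BY Year, Color"
).fetchall()
print(per_year_color)
```

Each GROUP BY query produces exactly one level of the hierarchy; getting all three levels plus sub-totals at once is what GROUP BY cannot do on its own.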
SQL Query ```sql SELECT 'ALL', 'ALL', 'ALL', SUM(Sales) FROM Sales WHERE Model = 'Chevy' UNION SELECT Model, 'ALL', 'ALL', SUM(Sales) FROM Sales WHERE Model = 'Chevy' GROUP BY Model UNION SELECT Model, Year, Color, SUM(Sales) FROM Sales WHERE Model = 'Chevy' GROUP BY Model, Year, Color ``` Adding the colors... ```sql SELECT Model, 'ALL', Color, SUM(Sales) FROM Sales WHERE Model = 'Chevy' GROUP BY Model, Color ``` CUBE and Roll Up Operators The Cube • An Example of 3D Data Cube - By Make & Year - By Make & Color - By Color & Year - By Year - By Make - By Color - Sum Cube: Each Attribute is a Dimension • N-dimensional Aggregate (sum(), max(), ...) – Fits relational model exactly: • \( a_1, a_2, ..., a_N, f() \) • Super-aggregate over N-1 dimensional sub-cubes • \( \text{ALL}, a_2, ..., a_N, f() \) • \( a_1, \text{ALL}, a_3, ..., a_N, f() \) • \( ... \) • \( a_1, a_2, ..., \text{ALL}, f() \) – This is the N-1 dimensional cross-tab. • Super-aggregate over N-2 dimensional sub-cubes The Data Cube Concept Sub-cube Derivation • Dimension collapse, * denotes ALL CUBE Operator Possible syntax • Proposed syntax example: – SELECT Model, Make, Year, SUM(Sales) FROM Sales WHERE Model IN {"Chevy", "Ford"} AND Year BETWEEN 1990 AND 1994 GROUP BY CUBE Model, Make, Year HAVING SUM(Sales) > 0; – Note: GROUP BY operator repeats aggregate list • in select list • in group by list ROLLUP Operator • ROLLUP Operator: special case of CUBE Operator Return "Sales Roll Up by Store by Quarter" in 1994: SELECT Store, quarter, SUM(Sales) FROM Sales WHERE nation="Korea" AND Year=1994 GROUP BY ROLLUP Store, Quarter(Date) AS quarter; Summary - Problems with GROUP BY - GROUP BY cannot directly construct - Pivot tables / roll-up reports - Cross-Tabs - CUBE Operator - Generalizes GROUP BY, Roll-Up and Cross-Tabs! Cube Operator Example Now let's have a look at one... 
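SQLite, like many engines, has no CUBE operator, so the slide's UNION construction can be replayed literally. A sketch, assuming the same toy Sales table, that builds every aggregation level with 'ALL' as the placeholder value:

```python
import sqlite3

# Emulate CUBE over (Year, Color) for Chevy by UNIONing one GROUP BY
# query per combination of real attributes and the 'ALL' placeholder.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales(Model TEXT, Year INT, Color TEXT, Sales INT)")
con.executemany("INSERT INTO Sales VALUES (?,?,?,?)", [
    ("Chevy", 1994, "Black", 50), ("Chevy", 1994, "White", 40),
    ("Chevy", 1995, "Black", 85), ("Chevy", 1995, "White", 115),
])

cube = con.execute("""
    SELECT Model, Year, Color, SUM(Sales) FROM Sales GROUP BY Model, Year, Color
    UNION ALL
    SELECT Model, Year, 'ALL', SUM(Sales) FROM Sales GROUP BY Model, Year
    UNION ALL
    SELECT Model, 'ALL', Color, SUM(Sales) FROM Sales GROUP BY Model, Color
    UNION ALL
    SELECT Model, 'ALL', 'ALL', SUM(Sales) FROM Sales GROUP BY Model
""").fetchall()

for row in cube:
    print(row)
# ('Chevy', 'ALL', 'ALL', 290) is the grand total for Chevy.
```

The 4 + 2 + 2 + 1 = 9 result rows are exactly the cells a CUBE over (Year, Color) would return; for N dimensions the hand-written version needs 2^N sub-queries, which is the motivation for a dedicated operator.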
- NASA Workforce cubes http://nasapeople.nasa.gov/workforce - Btell demo reports http://www.btell.de Follow the "demo" link and start a demo, then go to reports OLAP Screen Example Warehouse Architecture Why a Warehouse? - Two Approaches: - Query-Driven (Lazy) - Warehouse (Eager) Multidimensional Data - Sales volume as a function of product, month, and region Dimensions: Product, Location, Time Hierarchical summarization paths - Product: Category > Product - Location: Country > City > Office - Time: Quarter > Month / Week > Day Dimension Hierarchies - Store - Region - City - Location - Snowflake schema - Constellations Terms - Fact table - Dimension tables - Measures Star <table> <thead> <tr> <th>ProductId</th> <th>Name</th> <th>Price</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>bolt</td> <td>10</td> </tr> <tr> <td>p2</td> <td>nut</td> <td>5</td> </tr> </tbody> </table> <table> <thead> <tr> <th>StoreId</th> <th>City</th> </tr> </thead> <tbody> <tr> <td>c1</td> <td>nyc</td> </tr> <tr> <td>c2</td> <td>sfo</td> </tr> <tr> <td>c3</td> <td>la</td> </tr> </tbody> </table> <table> <thead> <tr> <th>OrderId</th> <th>Date</th> <th>CustId</th> <th>ProdId</th> <th>StoreId</th> <th>Qty</th> <th>Amt</th> </tr> </thead> <tbody> <tr> <td>o100</td> <td>1/7/97</td> <td>53</td> <td>p1</td> <td>c1</td> <td>1</td> <td>12</td> </tr> <tr> <td>o102</td> <td>2/7/97</td> <td>53</td> <td>p2</td> <td>c1</td> <td>2</td> <td>13</td> </tr> <tr> <td>o105</td> <td>3/6/97</td> <td>111</td> <td>p1</td> <td>c3</td> <td>5</td> <td>50</td> </tr> </tbody> </table> <table> <thead> <tr> <th>CustId</th> <th>Name</th> <th>Address</th> <th>City</th> </tr> </thead> <tbody> <tr> <td>53</td> <td>joe</td> <td>10 main</td> <td>sfo</td> </tr> <tr> <td>111</td> <td>sally</td> <td>80 willow</td> <td>la</td> </tr> </tbody> </table> Star Schema (Hector Garcia-Molina: Data Warehousing and OLAP) Cube Fact table view: <table> <thead> <tr> <th>prodId</th> <th>storeId</th> <th>amt</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>c1</td> <td>12</td> </tr> <tr> <td>p2</td> <td>c1</td> <td>11</td> </tr> <tr> <td>p1</td> <td>c3</td> <td>50</td> </tr> <tr> <td>p2</td> <td>c3</td> <td>5</td> </tr> </tbody> </table> Multi-dimensional cube: dimensions = 2 3-D Cube Fact table view: <table> <thead> <tr> <th>prodId</th> <th>storeId</th> <th>date</th> <th>amt</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>c1</td> <td>1</td> <td>12</td> </tr> <tr> <td>p1</td> <td>c3</td> <td>1</td> <td>50</td> </tr> <tr> <td>p2</td> <td>c2</td> <td>1</td> <td>8</td> </tr> <tr> <td>p1</td> <td>c1</td> <td>2</td> <td>44</td> </tr> <tr> <td>p2</td> <td>c1</td> <td>2</td> <td>4</td> </tr> <tr> <td>p1</td> <td>c2</td> <td>2</td> <td>4</td> </tr> </tbody> </table> Multi-dimensional cube: dimensions = 3 Star Schema Sales Fact Table - time_key - item_key - branch_key - location_key - units_sold - dollars_sold - avg_sales Measures - time - item - branch - location Snowflake Schema Sales Fact Table - time_key - item_key - branch_key - location_key - units_sold - dollars_sold - avg_sales Measures - time - item - branch - location OLTP vs. OLAP - OLTP – Online Transaction Processing - Traditional database technology - Many small transactions (point queries: UPDATE or INSERT) - Avoid redundancy, normalize schemas - Access to consistent, up-to-date database - OLTP Examples: - Flight reservation - Banking and financial transactions - Order Management, Procurement, ... - Extremely fast response times... Carsten Binnig, ETH Zürich OLTP vs. 
OLAP - OLAP – Online Analytical Processing - Big aggregate queries, no Updates - Redundancy a necessity (Materialized Views, special-purpose indexes, de-normalized schemas) - Periodic refresh of data (daily or weekly) - OLAP Examples - Decision support (sales per employee) - Marketing (purchases per customer) - Biomedical databases - Goal: Response Time of seconds / few minutes Carsten Binnig, ETH Zürich ### OLTP vs. OLAP (Water and Oil) - **Lock Conflicts:** OLAP blocks OLTP - **Database design:** - OLTP normalized, OLAP de-normalized - **Tuning, Optimization** - OLTP: inter-query parallelism, heuristic optimization - OLAP: intra-query parallelism, full-fledged optimization - **Freshness of Data:** - OLTP: serializability - OLAP: reproducibility - **Integrity:** - OLTP: ACID - OLAP: Sampling, Confidence Intervals ### Solution: Data Warehouse - **Special Sandbox for OLAP** - **Data input using OLTP systems** - **Data Warehouse aggregates and replicates data** (special schema) - **New Data is periodically uploaded to Warehouse** ### DW Architecture ### What is data warehouse - **Information system for reporting purposes** - **The goal is to fulfill reporting needs which are unsatisfied in operational system** - It is easy to modify old and design new reports - No „write spec to software developer to get the report“ anymore - Reports can be filled with data quickly - No „start the report generation at night to prevent system load“ anymore - **The data comes from operational system(s)** ### Goal of the work package - **Work out the main concepts for building data warehouse for hospital IS** - What are the reporting needs? - What are the data cubes that cover most reporting needs for „universal“ hospital? - How to get the data into these cubes? 
### Partners in this work package - **Ida-Tallinna Keskhaigla (ITK)** - One of the biggest hospitals in Estonia - Huge amount of data in the operational system (system called ESTER) - Has difficulties in generating reports on the operational system - Interested in improving report management - **Quretec** - Provides data management software for different clients in Europe, especially in the healthcare area - Interested in deepening its knowledge of the data warehousing area So far... (1) - We have analyzed the data and data structures in the operational system So far... (2) - We have designed the interface for getting the data from ESTER - We have built 2 data cubes So far... (3) - We have designed 10 reports on the data So far... (4) - Showed that report generation time has been reduced from tens of minutes to a few seconds <table> <thead> <tr> <th>Selected period</th> <th>Number of patients</th> <th>Seconds for generating report in operational system</th> <th>Seconds for generating the same report in data warehouse</th> </tr> </thead> <tbody> <tr> <td>1 day</td> <td>138</td> <td>149</td> <td>1</td> </tr> <tr> <td>1 month</td> <td>2944</td> <td>150</td> <td>1</td> </tr> <tr> <td>1 year</td> <td>32386</td> <td>164</td> <td>1</td> </tr> </tbody> </table> So far... (5) - We showed that the data warehouse offers additional benefits: - Multiple output formats - Reports can be redesigned easily - New combined reports -> new value from the data Implementing a Warehouse - Monitoring: Sending data from sources - Integrating: Loading, cleansing, ... - Processing: Query processing, indexing, ... - Managing: Metadata, Design, ... Monitoring - Source Types: relational, flat file, IMS, VSAM, IDMS, WWW, news-wire, ... - Incremental vs. 
Refresh <table> <thead> <tr> <th>customer id</th> <th>name</th> <th>address</th> <th>city</th> </tr> </thead> <tbody> <tr> <td>53</td> <td>joe</td> <td>10 main</td> <td>sfo</td> </tr> <tr> <td>51</td> <td>fred</td> <td>12 main</td> <td>sfo</td> </tr> <tr> <td>111</td> <td>sally</td> <td>80 willow</td> <td>la</td> </tr> </tbody> </table> Monitoring Techniques - Periodic snapshots - Database triggers - Log shipping - Data shipping (replication service) - Transaction shipping - Polling (queries to source) - Screen scraping - Application level monitoring Advantages & Disadvantages!! Monitoring Issues - Frequency - periodic: daily, weekly, ... - triggered: on “big” change, lots of changes, ... - Data transformation - convert data to uniform format - remove & add fields (e.g., add date to get history) - Standards (e.g., ODBC) - Gateways Integration - Data Cleaning - Data Loading - Derived Data Data Cleaning - Migration (e.g., yen ⇒ dollars) - Scrubbing: use domain-specific knowledge (e.g., social security numbers) - Fusion (e.g., mail list, customer merging) - Auditing: discover rules & relationships (like data mining) Loading Data - Incremental vs. refresh - Off-line vs. on-line - Frequency of loading - At night, 1x a week/month, continuously - Parallel/Partitioned load ### Derived Data - Derived Warehouse Data - indexes - aggregates - materialized views (next slide) - When to update derived data? - Incremental vs. 
refresh ### Materialized Views Define new warehouse relations using SQL expressions #### Table: Materialized Views <table> <thead> <tr> <th>prodId</th> <th>storeId</th> <th>date</th> <th>amt</th> <th>view</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>c1</td> <td>1</td> <td>12</td> <td>62</td> </tr> <tr> <td>p2</td> <td>c1</td> <td>1</td> <td>11</td> <td>19</td> </tr> <tr> <td>p1</td> <td>c3</td> <td>1</td> <td>50</td> <td>54</td> </tr> <tr> <td>p2</td> <td>c2</td> <td>1</td> <td>8</td> <td>63</td> </tr> <tr> <td>p1</td> <td>c1</td> <td>2</td> <td>44</td> <td>120</td> </tr> <tr> <td>p1</td> <td>c3</td> <td>2</td> <td>4</td> <td>16</td> </tr> </tbody> </table> <table> <thead> <tr> <th>id</th> <th>name</th> <th>price</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>bolt</td> <td>10</td> </tr> <tr> <td>p2</td> <td>nut</td> <td>5</td> </tr> </tbody> </table> ### Processing - ROLAP servers vs. MOLAP servers - Index Structures - What to Materialize? - Algorithms ### ROLAP Server - Relational OLAP Server <table> <thead> <tr> <th>id</th> <th>name</th> <th>price</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>bolt</td> <td>10</td> </tr> <tr> <td>p2</td> <td>nut</td> <td>5</td> </tr> </tbody> </table> ### MOLAP Server - Multi-Dimensional OLAP Server - Index Structures - Traditional Access Methods - B-trees, hash tables, R-trees, grids, ... 
- Popular in Warehouses - inverted lists - bit map indexes - join indexes - text indexes Inverted Lists - Query: - Get people with age = 20 and name = "fred" - List for age = 20: r4, r18, r34, r35 - List for name = "fred": r18, r52 - Answer is intersection: r18 Using Inverted Lists Bit Maps - Same query: - Get people with age = 20 and name = "fred" - The lists become bit vectors, one bit per row - ANDing the bit vectors gives the intersection: r18 Using Bit Maps Join - "Combine" SALE, PRODUCT relations - In SQL: SELECT * FROM SALE, PRODUCT WHERE SALE.prodId = PRODUCT.id Join Indexes - Product - Join index What to Materialize? - Store in warehouse results useful for common queries - Example: <table> <thead> <tr> <th></th> <th>c1</th> <th>c2</th> <th>c3</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>44</td> <td>4</td> <td>50</td> </tr> <tr> <td>p2</td> <td>12</td> <td>50</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th></th> <th>c1</th> <th>c2</th> <th>c3</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>56</td> <td>4</td> <td>50</td> </tr> <tr> <td>p2</td> <td>11</td> <td>8</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th></th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>p1</td> <td>110</td> </tr> <tr> <td>p2</td> <td>19</td> </tr> </tbody> </table> Materialization Factors - Type/frequency of queries - Query response time - Storage cost - Update cost Cube Aggregates Lattice Dimension Hierarchies - Use greedy algorithm to decide what to materialize Interesting Hierarchy - Conceptual dimension table Design - What data is needed? - Where does it come from? - How to clean data? - How to represent in warehouse (schema)? - What to summarize? - What to materialize? - What to index? 
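The inverted-list and bitmap lookups described above amount to the same set intersection, computed differently. A minimal sketch (the rows and values are hypothetical, not the slide's r4/r18 data) using Python ints as bit vectors:

```python
# Toy bitmap index: one Python int per attribute value, with bit i set
# when row i matches. ANDing the bitmaps intersects the row sets,
# exactly like the query "age = 20 AND name = 'fred'" above.
rows = [("sue", 31), ("fred", 20), ("ann", 20), ("fred", 45)]

index_age, index_name = {}, {}
for i, (name, age) in enumerate(rows):
    index_age[age] = index_age.get(age, 0) | (1 << i)
    index_name[name] = index_name.get(name, 0) | (1 << i)

# Bitwise AND of the two bitmaps = intersection of the matching rows.
matches = index_age[20] & index_name["fred"]
hit_rows = [i for i in range(len(rows)) if matches >> i & 1]
print(hit_rows)  # [1] -> only row 1 is a fred aged 20
```

An inverted list would store sorted row-id lists instead of bit vectors; bitmaps trade space (one bit per row per distinct value) for very cheap AND/OR operations, which is why they suit low-cardinality warehouse attributes.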
Tools - Development - design & edit: schemas, views, scripts, rules, queries, reports - Planning & Analysis - what-if scenarios (schema changes, refresh rates), capacity planning - Warehouse Management - performance monitoring, usage patterns, exception reporting - System & Network Management - measure traffic (sources, warehouse, clients) - Workflow Management - "reliable scripts" for cleaning & analyzing data DW Products and Tools - Oracle 11g, IBM DB2, Microsoft SQL Server, ... - All provide OLAP extensions - SAP Business Information Warehouse - ERP vendors - MicroStrategy, Cognos (now IBM) - Specialized vendors - Kind of Web-based EXCEL - Niche Players (e.g., Btell) - Vertical application domain MDX (Multi-Dimensional eXpressions) - MDX is Microsoft's query language for OLAP Example SELECT {[Dim Date].[Time Year].[Time Year]} ON COLUMNS, {[Dim Location].[Region].[Region]} ON ROWS FROM [Mini DW] WHERE ([Measures].[Sales Amount]) <table> <thead> <tr> <th></th> <th>2007</th> <th>2008</th> </tr> </thead> <tbody> <tr> <td>Southeast</td> <td>324975.18</td> <td></td> </tr> <tr> <td>West</td> <td>351101.35</td> <td></td> </tr> </tbody> </table> Chapter 2: Data Preprocessing - Why preprocess the data? - Data cleaning - Data integration and transformation - Data reduction - Discretization and concept hierarchy generation - Summary Discretization - Three types of attributes: - Nominal — values from an unordered set, e.g., color, profession - Ordinal — values from an ordered set, e.g., military or academic rank - Continuous — real numbers, e.g., integer or real numbers - Discretization: - Divide the range of a continuous attribute into intervals - Some classification algorithms only accept categorical attributes. 
- Reduce data size by discretization
  - Prepare for further analysis

Discretization and Concept Hierarchy
- **Discretization**
  - Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals
  - Interval labels can then be used to replace actual data values
  - Supervised vs. unsupervised
  - Split (top-down) vs. merge (bottom-up)
  - Discretization can be performed recursively on an attribute
- **Concept hierarchy formation**
  - Recursively reduce the data by collecting and replacing low-level concepts (such as numeric values for age) by higher-level concepts (such as young, middle-aged, or senior)

Segmentation by Natural Partitioning
- A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, “natural” intervals.
  - If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
  - If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
  - If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals

Example of 3-4-5 Rule

<table>
<thead>
<tr> <th>Range</th> <th>Values</th> </tr>
</thead>
<tbody>
<tr> <td>(-$400,000 .. 0)</td> <td>(-$400,000 .. -300,000)</td> </tr>
<tr> <td>(0 .. 1,000,000)</td> <td>(0 .. 200,000)</td> </tr>
<tr> <td>(1,000,000 .. 2,000,000)</td> <td>(1,000,000 .. 1,200,000)</td> </tr>
<tr> <td>(2,000,000 .. 5,000,000)</td> <td>(2,000,000 .. 3,000,000)</td> </tr>
</tbody>
</table>

Example
- MIN = -351,976.00, MAX = 4,700,896.50
- LOW = 5th percentile = -159,876; HIGH = 95th percentile = 1,838,761
- msd = 1,000,000 (most significant digit)
- LOW = -1,000,000 (round down), HIGH = 2,000,000 (round up), giving 3 value ranges:
  1. (-1,000,000 .. 0]
  2. (0 .. 1,000,000]
  3. (1,000,000 .. 2,000,000]
- Adjust with the real MIN and MAX:
  1. (-400,000 .. 0]
  2. (0 .. 1,000,000]
  3. (1,000,000 .. 2,000,000]
  4. (2,000,000 ..
5,000,000]

Concept Hierarchy Generation for Categorical Data
- Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts
  - street < city < state < country
- Specification of a hierarchy for a set of values by explicit data grouping
  - {Urbana, Champaign, Chicago} < Illinois
- Specification of only a partial set of attributes
  - E.g., only street < city, not others
- Automatic generation of hierarchies (or attribute levels) by analysing the number of distinct values
  - E.g., for a set of attributes: {street, city, state, country}

**Automatic Concept Hierarchy Generation**
- Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set.
- The attribute with the most distinct values is placed at the lowest level of the hierarchy.
- Exceptions, e.g., weekday, month, quarter, year.
- country: 15 distinct values
- province_or_state: 365 distinct values
- city: 3,567 distinct values
- street: 674,339 distinct values

---

**Summary**
- OLAP and DW – a way to summarise data
- Prepare data for further data mining and visualisation
- Fact table, aggregation, queries & indexes, ...

---

**Reference (highly recommended)**
- [http://citeseer.ist.psu.edu/old/392672.html](http://citeseer.ist.psu.edu/old/392672.html)
- Data Warehousing chapter of Jiawei Han’s textbook (chapter 3)

---

**Homework**
- Exercises 1 and 4 at: [http://www.systems.ethz.ch/education/courses/fs09/data-warehousing/ex2.pdf](http://www.systems.ethz.ch/education/courses/fs09/data-warehousing/ex2.pdf)
- Multidimensional data modeling exercise in the course Wiki pages
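The 3-4-5 segmentation walked through above (LOW = -159,876, HIGH = 1,838,761, msd = 1,000,000) can be sketched in Python. This is a minimal sketch of the msd/rounding logic only; it does not do the rule's recursive refinement or the final MIN/MAX adjustment:

```python
import math

def rule_3_4_5(n_distinct):
    """Number of equi-width intervals suggested by the 3-4-5 rule."""
    if n_distinct in (3, 6, 7, 9):
        return 3
    if n_distinct in (2, 4, 8):
        return 4
    if n_distinct in (1, 5, 10):
        return 5
    return n_distinct  # outside the rule's table: one interval per value

def partition(low, high):
    """Top-level 3-4-5 split of (low, high), e.g. the 5th/95th percentiles."""
    msd = 10 ** int(math.floor(math.log10(high - low)))  # most significant digit
    lo = math.floor(low / msd) * msd                     # round LOW down
    hi = math.ceil(high / msd) * msd                     # round HIGH up
    n = rule_3_4_5(round((hi - lo) / msd))               # distinct msd values
    width = (hi - lo) // n
    return [(lo + i * width, lo + (i + 1) * width) for i in range(n)]

# For LOW = -159,876 and HIGH = 1,838,761: msd = 1,000,000, rounded range
# (-1,000,000 .. 2,000,000], 3 distinct msd values, hence 3 intervals.
```

In a full implementation each of these top-level intervals would then be re-partitioned the same way, which is how the sub-intervals in the table above arise.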
Lecture 18: Runtime Verification, Review & Wrapup
2016-07-18
Prof. Dr. Andreas Podelski, Dr. Bernd Westphal
Albert-Ludwigs-Universität Freiburg, Germany

Topic Area Code Quality Assurance: Content
- Introduction and Vocabulary
- Limits of Software Testing
- Glass-Box Testing
  - Statement-, branch-, term-coverage.
- Other Approaches
  - Model-based testing.
  - Runtime verification.
- Software quality assurance in a larger scope.
- Program Verification
  - partial and total correctness,
  - Proof System PD.
- Runtime Verification
- Review
- Code QA: Discussion

Content
- Runtime-Verification
  - Idea
  - Assertions
  - LSC-Observers
- Reviews
  - Roles and artefacts
  - Review procedure
  - Stronger and weaker variants
- Do’s and Don’ts in Code QA
- Code QA Techniques Revisited
  - Test
  - Runtime-Verification
  - Review
  - Static Checking
  - Formal Verification
  - Dependability

Recall: Three Basic Directions
(diagram: testing executes the program on inputs and compares input $\rightarrow$ output against expected outcomes $Soll$; a reviewer examines the artefact; formal verification proves $S \models \mathcal{I}$ and concludes $[S] \in [\mathcal{I}]$, arguing over all computation paths in $(\Sigma \times A)^\omega$ satisfying the specification; run-time verification checks properties of the running program)

Run-Time Verification: Idea
- Assume there is a function \( f \) in software \( S \) with the following specification: **pre-condition** \( p \), **post-condition** \( q \).
- Computation paths of \( S \) may look like this:
\[ \sigma_0 \xrightarrow{\alpha_1} \sigma_1 \xrightarrow{\alpha_2} \sigma_2 \cdots \xrightarrow{\alpha_{n-1}} \sigma_n \xrightarrow{\text{call } f} \sigma_{n+1} \cdots \sigma_m \xrightarrow{\text{f returns}} \sigma_{m+1} \cdots \]
- Assume there are functions \( \text{check}_p \) and \( \text{check}_q \) which check whether \( p \) and \( q \) hold at the current program state, and which do not modify the program state (except for the program counter).
- **Idea**: create software \( S' \) by (i) extending \( S \) by implementations of \( \text{check}_p \) and \( \text{check}_q \), (ii) calling \( \text{check}_p \) right after entering \( f \), (iii) calling \( \text{check}_q \) right before returning from \( f \).
- For \( S' \), we obtain computation paths like:
\[ \sigma_0 \xrightarrow{\alpha_1} \sigma_1 \xrightarrow{\alpha_2} \sigma_2 \cdots \xrightarrow{\alpha_{n-1}} \sigma_n \xrightarrow{\text{call } f} \sigma_{n+1} \xrightarrow{\text{check}_p} \sigma'_{n+1} \cdots \sigma_m \xrightarrow{\text{check}_q} \sigma'_m \xrightarrow{\text{f returns}} \sigma_{m+1} \cdots \]
- If \( \text{check}_p \) and \( \text{check}_q \) notify us of violations of \( p \) or \( q \), then we are notified of \( f \) violating its specification when running \( S' \) (= at run-time).

Run-Time Verification: Example

```c
int main() {
  while (true) {
    int x = read_number();
    int y = read_number();
    int sum = add(x, y);
    verify_sum(x, y, sum);
    display(sum);
  }
}

void verify_sum(int x, int y, int sum) {
  if (sum != (x + y) || (x + y > 99999999 && !(sum < 0))) {
    fprintf(stderr, "verify_sum: error\n");
    abort();
  }
}
```

A Very Useful Special Case: Assertions
- Maybe the simplest instance of runtime verification: **Assertions**.
- Available in the standard libraries of many programming languages (C, C++, Java, …).
- For example, the C standard library manual reads:

```
ASSERT(3)            Linux Programmer's Manual            ASSERT(3)

NAME
    assert — abort the program if assertion is false

SYNOPSIS
    #include <assert.h>

    void assert(scalar expression);

DESCRIPTION
    [...] the macro assert() prints an error message to standard
    error and terminates the program by calling abort(3) if
    expression is false (i.e., compares equal to zero).

    The purpose of this macro is to help the programmer find bugs
    in his program. The message "assertion failed in file foo.c,
    function do_bar(), line 1287" is of no help at all to a user.
```

- In C code, **assert** can be **disabled** in **production code** (`-D NDEBUG`).

The abstract example from run-time verification:

```c
void f(...) {
  assert(p);
  ...
  assert(q);
}
```

Compute the width of a progress bar:

```c
int progress_bar_width(int progress, int window_left, int window_right) {
  assert(window_left <= window_right); /* pre-condition */
  ... /* treat special cases 0 and 100 */
  ...
  assert(0 < progress && progress < 100); // extremal cases already treated
  ...
  assert(window_left <= r && r <= window_right); /* post-condition */
  return r;
}
```

Recall the **structure model** with Proto-OCL constraint from Exercise Sheet 4. Assume we add a method `set_key()` to class **TreeNode**:

```java
class TreeNode {
  private int key;
  TreeNode parent, leftChild, rightChild;
  public int get_key() { return key; }
  public void set_key(int new_key) { key = new_key; }
}
```

We can check consistency with the Proto-OCL constraint at runtime by using assertions:

```java
public void set_key(int new_key) {
  assert (parent == null || parent.get_key() <= new_key);
  assert (leftChild == null || new_key <= leftChild.get_key());
  assert (rightChild == null || new_key <= rightChild.get_key());
  key = new_key;
}
```

Use `java -ea ...` to **enable assertion checking** (disabled by default).
(cf. https://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html)

More Complex Run-Time Verification: LSC Observers

(diagram: ChoicePanel state machine with states `idle`, `half_idle`, `request_sent` and selection nodes `water_selected`, `soft_selected`, `tea_selected`, reached on `WATER?`, `SOFT?`, `TEA?`; on leaving, `water_enabled`, `soft_enabled`, `tea_enabled` are set to false)

Observer sketch (pseudo-code):

```
st : { idle, wsel, ssel, tsel, reqs, half };

take_event( E : { TAU, WATER, SOFT, TEA, ... } ) {
  bool stable = 1;
  switch (st) {
    case idle :
      switch (E) {
        case WATER : if (water_enabled) { st := wsel; stable := 0; };
        case SOFT : ...;
        ...
      };
    case wsel :
      switch (E) {
        case TAU : send_DWATER(); st := reqs;
                   // annotation on the slide: "Hey Observer, I just sent DWATER()"
        ...
      };
    ...
  }
}
```

Run-Time Verification: Discussion
- **Experience:** during development, assertions for pre/post-conditions and intermediate invariants are an extremely powerful tool with a very attractive gain/effort ratio (low effort, high gain).
- Assertions effectively work as a safe-guard against unexpected use of functions and regression, e.g. during later maintenance or efficiency improvement.
- Can serve as formal (support of) documentation: “Dear reader, at this point in the program, I expect condition expr to hold, because …”
- **Development vs. release versions:**
  - Common practice:
    - development version with run-time verification enabled (cf. assert(3)),
    - release version without run-time verification.
  - If run-time verification is enabled in a release version, the software should terminate as gracefully as possible (e.g. try to save data), and save information from the assertion failure, if possible, for future analysis.
- **Risk:** with bad luck, the software only behaves well because of the run-time verification code… Then disabling run-time verification “breaks” the software. Yet very complex run-time verification may significantly slow down the software, so it needs to be disabled…

Content
- Runtime-Verification
  - Idea
  - Assertions
  - LSC-Observers
- Reviews
  - Roles and artefacts
  - Review procedure
  - Stronger and weaker variants
- Do’s and Don’ts in Code QA
- Code QA Techniques Revisited
  - Test
  - Runtime-Verification
  - Review
  - Static Checking
  - Formal Verification
  - Dependability

Review
- **Input to Review Session:**
  - **Review item**: can be every closed, human-readable part of software (documentation, module, test data, installation manual, etc.)
    **Social aspect**: it is an **artefact** which is examined, not the human (who created it).
  - **Reference documents**: need to enable an assessment (requirements specification, guidelines (e.g. coding conventions), catalogue of questions (“all variables initialised?”), etc.)
- **Roles:**
  - **Moderator**: leads the session, responsible for a properly conducted procedure.
  - **Author**: (representative of the) creator(s) of the artefact under review; is present to listen to the discussions; can answer questions; does not speak up if not asked.
  - **Reviewer(s)**: persons able to judge the artefact under review; maybe different reviewers for different aspects (programming, tool usage, etc.), at best experienced in detecting inconsistencies or incompleteness.
  - **Transcript Writer**: keeps the minutes of the review session; this role can be assumed by the author.
- The **review team** consists of everybody but the author(s).

Review Procedure Over Time
- **planning**: reviews need time in the project plan. A review is triggered, e.g., by a submission to the revision control system: the moderator invites (including the review item in the invitation) and states review missions.
- **preparation**: reviewers investigate the review item.
- **review session**: reviewers report, evaluate, and document issues; open questions are resolved.
- **postparation**: rework of the review item; responsibility of the author(s). Reviewers may state proposals for solutions or improvements.
- **analysis**: improve the development and review process.
- Reviewers re-assess the reworked review item (until approval is declared).

Review Rules *(Ludewig and Lichter, 2013)*
(i) The **moderator** organises the review, issues invitations, supervises the review session.
(ii) The **moderator** may terminate the review if conduction is not possible, e.g., due to inputs, preparation, or people missing.
(iii) The review session is **limited to 2 hours**. If needed: organise more sessions.
(iv) The **review item** is under review, not the author(s). Reviewers choose their words accordingly. Authors neither defend themselves nor the review item.
(v) Roles are **not mixed up**; e.g., the moderator does not act as reviewer. (Exception: the author may write the transcript.)
(vi) **Style** issues (outside fixed conventions) are **not discussed**.
(vii) The **review team** is **not** supposed to **develop solutions**. Issues are **not** noted down in the form of **tasks** for the **author(s)**.
(viii) Each **reviewer** gets the opportunity to present her/his findings appropriately.
(ix) **Reviewers** need to reach **consensus** on issues; the consensus is noted down.
(x) **Issues** are classified as:
- critical (review item unusable for its purpose),
- major (usability severely affected),
- minor (usability hardly affected),
- good (no problem).
(xi) The **review team** declares:
- accept **without changes**,
- accept **with changes**,
- do not accept.
(xii) The **protocol** is signed by all participants.

Stronger and Weaker Review Variants
- **Design and Code Inspection** (Fagan, 1976, 1986)
  - deluxe variant of review,
  - approx. 50% more time, approx. 50% more errors found.
- **Review**
- **Structured Walkthrough**: a simpler variant of review:
  - the developer moderates the walkthrough session,
  - the developer presents the artefact(s),
  - reviewers pose (prepared or spontaneous) questions,
  - issues are noted down,
  - **Variation point**: do reviewers see the artefact before the session?
  - less effort, less effective.
  - → disadvantages: unclear responsibilities; a “salesman” developer may trick reviewers.
- **Comment** (‘Stellungnahme’)
  - colleague(s) of the developer read the artefacts,
  - the developer considers the feedback.
  - → advantage: low organisational effort;
  - → disadvantages: choice of colleagues may be biased; no protocol; consideration of comments at the discretion of the developer.
- **Careful Reading** (‘Durchsicht’)
  - done by the developer,
  - recommendation: read “away from the screen” (use a print-out, or a different device and situation).

Some Final, General Guidelines: Do’s and Don’ts in Code Quality Assurance
- **Avoid** using special *examination versions* for examination. (Test harness, stubs, etc. *may have errors* which may cause false positives and (!) false negatives.)
- **Avoid** stopping the examination when the first error is detected.
  - **Clear**: the examination should be aborted if the examined program is not executable at all.
- **Do not modify** the artefact under examination *during* examination:
  - otherwise it is **unclear what exactly** has been examined (“moving target”); examination results need to be uniquely traceable to one artefact version;
  - fundamental flaws are sometimes **easier to detect** with a *complete picture* of unsuccessful/successful tests;
  - changes are particularly **error-prone** and should not happen “en passant” during examination;
  - fixing flaws during examination may cause them to **go uncounted** in the statistics (which we need for all kinds of estimation);
  - the roles of developer and examiner are different anyway: an examiner fixing flaws would **violate the role assignment**.
- **Do not switch** (fine-grained) between examination and debugging.
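The pre-/post-condition checking from the runtime-verification part can be sketched in Python as a decorator. `contract` and `progress_bar_x` are hypothetical names invented for this sketch, and, like C's `assert` under `NDEBUG` or Java's `-ea`, the checks vanish when Python runs with `-O`:

```python
import functools

def contract(pre=None, post=None):
    """Wrap a function with run-time checks, like check_p / check_q."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args, **kwargs):
            if pre is not None:
                # pre-condition: checked right after entering f
                assert pre(*args, **kwargs), f"{f.__name__}: pre-condition violated"
            result = f(*args, **kwargs)
            if post is not None:
                # post-condition: checked right before returning from f
                assert post(result, *args, **kwargs), f"{f.__name__}: post-condition violated"
            return result
        return checked
    return wrap

# Hypothetical analogue of progress_bar_width from the assertion examples.
@contract(pre=lambda left, right, progress: left <= right and 0 <= progress <= 100,
          post=lambda r, left, right, progress: left <= r <= right)
def progress_bar_x(left, right, progress):
    return left + (right - left) * progress // 100
```

Calling `progress_bar_x(10, 0, 50)` raises an `AssertionError` naming the violated pre-condition, which is exactly the "notified at run-time" behaviour the lecture describes.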
Code Quality Assurance Techniques Revisited

Techniques Revisited

<table>
<thead>
<tr> <th></th> <th>automatic</th> <th>prove “can run”</th> <th>toolchain considered</th> <th>exhaustive</th> <th>prove correct</th> <th>partial results</th> <th>entry cost</th> </tr>
</thead>
<tbody>
<tr> <td>Test</td> <td>(✔)</td> <td>✔</td> <td>✔</td> <td>✘</td> <td>✘</td> <td>✔</td> <td>✔</td> </tr>
<tr> <td>Runtime-Verification</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Review</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Static Checking</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Verification</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

Strengths:
- can be fully automatic (yet not easy for GUI programs);
- a negative test proves “program not completely broken”, “can run” (or positive scenarios);
- the final product is examined, thus toolchain and platform are considered;
- one can stop at any time and take partial results;
- few, simple test cases are usually easy to obtain;
- provides reproducible counter-examples (a good starting point for repair).

Weaknesses:
- (in most cases) vastly incomplete, thus no proofs of correctness;
- creating test cases for complex functions (or complex conditions) can be difficult;
- maintenance of many, complex test cases can be challenging.
- executing many tests may need substantial time (but: they can sometimes be run in parallel);

## Techniques Revisited

<table>
<thead>
<tr> <th></th> <th>automatic</th> <th>prove “can run”</th> <th>toolchain considered</th> <th>exhaustive</th> <th>prove correct</th> <th>partial results</th> <th>entry cost</th> </tr>
</thead>
<tbody>
<tr> <td><strong>Test</strong></td> <td>✔</td> <td>✔</td> <td>✔</td> <td>✘</td> <td>✘</td> <td>✔</td> <td>✔</td> </tr>
<tr> <td><strong>Runtime-Verification</strong></td> <td>✔</td> <td>(✔)</td> <td>✔</td> <td>(✘)</td> <td>✘</td> <td>✔</td> <td>(✔)</td> </tr>
<tr> <td><strong>Review</strong></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td><strong>Static Checking</strong></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td><strong>Verification</strong></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

### Strengths:
- **fully automatic** (once observers are in place);
- **provides counter-examples**;
- the (nearly) **final product is examined**, thus toolchain and platform are considered;
- one can stop at any time and take **partial results**;
- **assert-statements have a very good effort/effect ratio**.

### Weaknesses:
- counter-examples are **not necessarily reproducible**;
- may negatively affect **performance**;
- the code is changed, so the program may only run **because of** the observers;
- completeness depends on usage, may also be **vastly incomplete**, so no correctness proofs;
- constructing observers for complex properties may be **difficult**; one needs to learn how to construct observers.
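The observer idea compared above (a state machine that watches events emitted by the instrumented program) can be sketched in Python. The `ChoicePanel`-style events and the property checked, "no `DWATER` without a prior water selection", are made up for illustration:

```python
class Observer:
    """Hypothetical run-time monitor for a drink-dispenser event stream."""

    def __init__(self):
        self.state = "idle"
        self.violations = []

    def notify(self, event):
        # the instrumented program calls notify() whenever it emits an event
        if self.state == "idle" and event == "WATER":
            self.state = "water_selected"
        elif self.state == "water_selected" and event == "DWATER":
            self.state = "idle"          # dispense after selection: ok
        elif event == "DWATER":
            # dispense without selection violates the monitored property
            self.violations.append("DWATER without prior WATER selection")

obs = Observer()
for e in ["WATER", "DWATER", "DWATER"]:
    obs.notify(e)
# the first DWATER is legal, the second one is flagged
```

This also illustrates the weakness noted above: the program must be changed to call `notify()`, so with bad luck it only behaves well because of the instrumentation.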
Techniques Revisited

<table>
<thead>
<tr> <th></th> <th>automatic</th> <th>prove “can run”</th> <th>toolchain considered</th> <th>exhaustive</th> <th>prove correct</th> <th>partial results</th> <th>entry cost</th> </tr>
</thead>
<tbody>
<tr> <td>Test</td> <td>✔</td> <td>✔</td> <td>✔</td> <td>✘</td> <td>✘</td> <td>✔</td> <td>✔</td> </tr>
<tr> <td>Runtime-Verification</td> <td>✔</td> <td>(✔)</td> <td>✔</td> <td>(✘)</td> <td>✘</td> <td>✔</td> <td>(✔)</td> </tr>
<tr> <td>Review</td> <td>✘</td> <td>✘</td> <td>✘</td> <td>✔</td> <td>(✔)</td> <td>✔</td> <td>(✔)</td> </tr>
<tr> <td>Static Checking</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Verification</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

Strengths:
- human readers can understand the code and may spot point errors;
- reported to be highly effective;
- one can stop at any time and take partial results;
- intermediate entry costs; a good effort/effect ratio is achievable.

Weaknesses:
- no tool support;
- no results on actual execution, toolchain not reviewed;
- human readers may overlook errors; usually not aiming at proofs;
- does (in general) not provide counter-examples; developers may deny the existence of an error.
## Techniques Revisited

<table>
<thead>
<tr> <th></th> <th>automatic</th> <th>prove “can run”</th> <th>toolchain considered</th> <th>exhaustive</th> <th>prove correct</th> <th>partial results</th> <th>entry cost</th> </tr>
</thead>
<tbody>
<tr> <td><strong>Test</strong></td> <td>✔</td> <td>✔</td> <td>✔</td> <td>✘</td> <td>✘</td> <td>✔</td> <td>✔</td> </tr>
<tr> <td><strong>Runtime-Verification</strong></td> <td>✔</td> <td>(✔)</td> <td>✔</td> <td>(✘)</td> <td>✘</td> <td>✔</td> <td>(✔)</td> </tr>
<tr> <td><strong>Review</strong></td> <td>✘</td> <td>✘</td> <td>✘</td> <td>(✔)</td> <td>(✔)</td> <td>✔</td> <td>(✔)</td> </tr>
<tr> <td><strong>Static Checking</strong></td> <td>✔</td> <td>(✘)</td> <td>✘</td> <td>✔</td> <td>(✔)</td> <td>✔</td> <td>(✘)</td> </tr>
<tr> <td><strong>Verification</strong></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

### Strengths:
- there are (commercial), **fully automatic** tools (lint, Coverity, Polyspace, etc.);
- some tools are **complete** (relative to assumptions on language semantics, platform, etc.);
- can be **faster than testing**;
- one can stop at any time and take **partial results**.

### Weaknesses:
- no results on actual execution, **toolchain not reviewed**;
- can be very **resource consuming** (if few false positives are wanted), e.g., code may need to be “designed for static analysis”;
- many false positives can be very **annoying to developers** (if fast checks are wanted);
- distinguishing **false from true positives** can be challenging;
- configuring the **tools** (to limit false positives) can be challenging.
Techniques Revisited <table> <thead> <tr> <th></th> <th>automatic</th> <th>prove “can run”</th> <th>toolchain considered</th> <th>exhaustive</th> <th>prove correct</th> <th>partial results</th> <th>entry cost</th> </tr> </thead> <tbody> <tr> <td>Test</td> <td>✔️</td> <td>✔️</td> <td>✔️</td> <td>✘</td> <td>✘</td> <td>✔</td> <td>✔️</td> </tr> <tr> <td>Runtime-Verification</td> <td>✔️</td> <td>(✔️)</td> <td>✔️</td> <td>(✘)</td> <td>✘</td> <td>✔</td> <td>(✔️)</td> </tr> <tr> <td>Review</td> <td>✘</td> <td>✘</td> <td>✘</td> <td>✔️</td> <td>(✔️)</td> <td>✔</td> <td>(✔️)</td> </tr> <tr> <td>Static Checking</td> <td>✔️</td> <td>(✘)</td> <td>✘</td> <td>✔️</td> <td>(✔️)</td> <td>✔</td> <td>(✘)</td> </tr> <tr> <td>Verification</td> <td>(✔️)</td> <td>✘</td> <td>✘</td> <td>✔️</td> <td>✔</td> <td>(✘)</td> <td>✘</td> </tr> </tbody> </table> Strengths: - some **tool support** available (few commercial tools); - **complete** (relative to assumptions on language semantics, platform, etc.); - thus can provide **correctness proofs**; - can prove correctness for **multiple language semantics and platforms** at a time; - can be **more efficient than other techniques**. Weaknesses: - no results on actual execution, **toolchain not reviewed**; - not many **intermediate results**: “half of a proof” may not allow any useful conclusions; - **entry cost high**: significant training is useful to know how to deal with tool limitations; - proving things is challenging; failing to find a proof does not allow any useful conclusion; - **false negatives** (broken program “proved” correct) hard to detect. Quality Assurance — Concluding Discussion Proposal: Dependability Cases (Jackson, 2009) - A **dependable** system is one you can **depend** on – that is, you can place your trust in it. 
“Developers [should] express the critical properties and make an explicit argument that the system satisfies them.”

**Quality assurance** – (1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.

**Proposed Approach:**
- Identify the **critical requirements**, and determine what **level of confidence** is needed. Most systems do also have **non-critical** requirements.
- Construct a **dependability case**: an argument that the software, in concert with other components, establishes the critical properties.
- The case should be
  - **auditable**: can (easily) be evaluated by a third-party certifier,
  - **complete**: no holes in the argument; any assumptions that are not justified should be noted (e.g. assumptions on the compiler, on the protocol obeyed by users, etc.),
  - **sound**: e.g. should not claim full correctness [...] based on nonexhaustive testing; should not make unwarranted assumptions on the independence of component failures; etc.

Still, it seems like computer systems more or less inevitably have errors. Then why...
- ... do modern planes fly at all? (i) very careful development, (ii) very thorough analysis, (iii) strong regulatory obligations. **Plus:** classical engineering wisdom for high reliability, like redundancy.
- ... do modern cars drive at all? (i) careful development, (ii) thorough analysis, (iii) regulatory obligations. **Plus:** classical engineering wisdom for high reliability, like monitoring.

Tell Them What You’ve Told Them...
- **Runtime Verification**
  - (as the name suggests) checks properties at **program run-time**,
  - a good **pinch of `assert`s** can be a valuable safe-guard against
    - **regressions**,
    - usage **outside specification**,
    - etc. and serve as **formal documentation** of assumptions.
- **Review** (structured examination of artefacts by humans) - (mild variant) advocated in the XP approach, - **not uncommon**: lead programmer reviews all **commits** from team members, - literature reports good effort/effect ratio achievable. - **All approaches to code quality assurance** have their - **advantages** and **drawbacks**. - Which to use? It depends! - **Dependability Cases** - an (auditable, complete, sound) argument, that a software has the **critical properties**. References References Looking Back: 18 Lectures on Software Engineering Contents of the Course - **Introduction** - L 1: 18.4., Mon - L 2: 21.4., Thu - L 3: 25.4., Mon - **Scales, Metrics, Costs** - T 1: 28.4., Thu - **Development** - L 4: 2.5., Mon - - 5.5., Thu - **Process** - L 5: 9.5., Mon - - 12.5., Thu - - 16.5., Mon - - 19.5., Thu - T 2: 23.5., Mon - - 26.5., Thu - **Requirements Engineering** - L 7: 30.5., Mon - L 8: 2.6., Thu - L 9: 6.6., Mon - T 3: 9.6., Thu - **Architecture & Design** - L10: 13.6., Mon - L 11: 16.6., Thu - L12: 20.6., Mon - T 4: 23.6., Thu - **Software Modelling** - L13: 27.6., Mon - L14: 30.6., Thu - L15: 4.7., Mon - T 5: 7.7., Thu - **Quality Assurance (Testing, Formal Verification)** - L16: 11.7., Mon - L17: 14.7., Thu - L18: 18.7., Mon - **Wrap-Up** - L19: 21.7., Thu ## Expectations - none, because mandatory course - **overall** - ✓ well-structured lectures - ✓ (✓) praxis oriented - ✗ practical knowledge about planning, designing and testing software - ✓ improve skills in scientific work - ✗ (✓) more about scientific methods - **other courses** - ✗ more on how courses are linked together - ✗ skills we need to organise SoPra - ✓ maybe transfer knowledge in SoPra ### “real world” - ✓ vocabulary and methods in professional software development - ✓ learn how things work in a company, to easier integrate into teams, e.g., communication - **kinds of software** - ✓ embedded systems and software - ✗ how to combine HW and SW parts --- <table> <thead> <tr> 
<th>Course</th> <th>Lectures</th> </tr> </thead> <tbody>
<tr> <td>Introduction</td> <td>18.4., Mon</td> </tr>
<tr> <td>Scales, Metrics, Costs</td> <td>21.4., Thu</td> </tr>
<tr> <td>Development</td> <td>25.4., Mon</td> </tr>
<tr> <td>Process</td> <td>28.4., Thu</td> </tr>
<tr> <td>Requirements Engineering</td> <td>2.5., Mon</td> </tr>
<tr> <td>Process</td> <td>5.5., Thu</td> </tr>
<tr> <td>Requirements Engineering</td> <td>9.5., Mon</td> </tr>
<tr> <td>Process</td> <td>12.5., Thu</td> </tr>
<tr> <td>Process</td> <td>16.5., Mon</td> </tr>
<tr> <td>Requirements Engineering</td> <td>19.5., Thu</td> </tr>
<tr> <td>Architecture &amp; Design</td> <td>23.5., Mon</td> </tr>
<tr> <td>Architecture &amp; Design</td> <td>26.5., Thu</td> </tr>
<tr> <td>Architecture &amp; Design</td> <td>30.5., Mon</td> </tr>
<tr> <td>Requirements Engineering</td> <td>2.6., Thu</td> </tr>
<tr> <td>Architecture &amp; Design</td> <td>6.6., Mon</td> </tr>
<tr> <td>Architecture &amp; Design</td> <td>9.6., Thu</td> </tr>
<tr> <td>Software Modelling</td> <td>13.6., Mon</td> </tr>
<tr> <td>Software Modelling</td> <td>16.6., Thu</td> </tr>
<tr> <td>Software Modelling</td> <td>20.6., Mon</td> </tr>
<tr> <td>Software Modelling</td> <td>23.6., Thu</td> </tr>
<tr> <td>Software Modelling</td> <td>27.6., Mon</td> </tr>
<tr> <td>Software Modelling</td> <td>30.6., Thu</td> </tr>
<tr> <td>Software Modelling</td> <td>4.7., Mon</td> </tr>
<tr> <td>Quality Assurance (Testing, Formal Verification)</td> <td>7.7., Thu</td> </tr>
<tr> <td>Quality Assurance (Testing, Formal Verification)</td> <td>11.7., Mon</td> </tr>
<tr> <td>Quality Assurance (Testing, Formal Verification)</td> <td>14.7., Thu</td> </tr>
<tr> <td>Quality Assurance (Testing, Formal Verification)</td> <td>18.7., Mon</td> </tr>
<tr> <td>Wrap-Up</td> <td>21.7., Thu</td> </tr>
</tbody> </table>

Expectations Cont’d
- **software development**
  - ✓ understand how software development practically works
  - ✓ developing, maintaining software at bigger scale
  - ✓ aspects of software development
- **software project management**
  - ✓ learn what is important to plan
  - ✓ how to structure the process of a project
  - ✓ how to keep control of project, measure success
  - ✗ which projects need full-time project manager
  - ✗ which kind of documentation is really necessary
  - ✗ want to get better in leading a team; how to lead team of engineers
- **cost estimation**
  - ✓ how to estimate time and effort
  - ✗ formal methods for better planning of projects
  - ✗ tools which help planning
- **quality**
  - ✓ learn ways how to judge quality based on the requirements
  - ✓ avoid mistakes during software development
  - ✓ make better programs, or make programs more efficiently

### Expectations Cont’d

- **requirements**
  - ✓ formal ways to specify requirements
  - ✓ learn techniques to reduce misunderstandings
  - ✓ understand types of requirements
  - ✓ learn how requirements are to be stated
  - ✓ how to create requirements/specification document
- **design**
  - ✓ techniques for design
  - ✓ predict potential risks and crucial design errors
  - ✗ come up with good design, learn how to design
  - ✗ practical knowledge on application of design patterns
  - ✗ how to structure, compose components, how to define interfaces
  - ✗ standards for keeping parts of project compatible
  - ✗ how to guarantee a particular reliability
- **implementation**
  - ✓ modular programming, better documentation of big projects
  - ✗ more of computers and programming, write faster better programs
  - ✗ strengths and weaknesses of standards, training in their application
  - ✗ improve coding skills
  - ✗ how to increase (software) performance

### Expectations Cont’d

- **code quality assurance**
  - ✓ methods for testing to guarantee high level of quality
  - ✓ formal methods like program verification
  - ✗ learn about practical implementation of these tools
- **extra information**
  - “will work as teacher”
  - “want to work on medical software”
  - “want to work in automotive industry”
  - “worked as software-engineer”
#### Schedule

<table> <thead> <tr> <th>Course</th> <th>Date</th> </tr> </thead> <tbody> <tr> <td>Introduction</td> <td>L 1: 18.4., Mon</td> </tr> <tr> <td></td> <td>T 1: 28.4., Thu</td> </tr> <tr> <td>Scales, Metrics, Costs</td> <td>L 2: 21.4., Thu</td> </tr> <tr> <td></td> <td>L 3: 25.4., Mon</td> </tr> <tr> <td>Development</td> <td>L 4: 2.5., Mon</td> </tr> <tr> <td></td> <td>L 5: 9.5., Mon</td> </tr> <tr> <td></td> <td>L 6: 12.5., Thu</td> </tr> <tr> <td></td> <td>L 7: 26.5., Thu</td> </tr> <tr> <td></td> <td>L 8: 30.5., Mon</td> </tr> <tr> <td>Process</td> <td>T 2: 23.5., Mon</td> </tr> <tr> <td></td> <td>L 9: 6.6., Mon</td> </tr> <tr> <td></td> <td>T 3: 9.6., Thu</td> </tr> <tr> <td>Requirements Engineering</td> <td>L 10: 13.6., Mon</td> </tr> <tr> <td></td> <td>L 11: 16.6., Thu</td> </tr> <tr> <td></td> <td>L 12: 20.6., Mon</td> </tr> <tr> <td></td> <td>T 4: 23.6., Thu</td> </tr> <tr> <td>Architecture &amp; Design</td> <td>L 13: 27.6., Mon</td> </tr> <tr> <td></td> <td>L 14: 30.6., Thu</td> </tr> <tr> <td></td> <td>L 15: 4.7., Mon</td> </tr> <tr> <td></td> <td>T 5: 7.7., Thu</td> </tr> <tr> <td>Software Modelling</td> <td>L 16: 11.7., Mon</td> </tr> <tr> <td>Quality Assurance (Testing, Formal Verification)</td> <td>L 17: 14.7., Thu</td> </tr> <tr> <td></td> <td>L 18: 18.7., Mon</td> </tr> <tr> <td></td> <td>L 19: 21.7., Thu</td> </tr> <tr> <td>Wrap-Up</td> <td></td> </tr> </tbody> </table>

That’s Today’s Software Engineering — More or Less... Coming Soon to Your Local Lecture Hall...

[Figure: “Software Design/Engineering vs. Other Courses”, relating this course’s topics (Project Management, Requirements Engineering, Design, SW Modelling: vocabulary, techniques, informal methods) to follow-up courses (BSc/MSc projects & theses; Software Design/Modelling/Analysis in UML; CPS I: Discrete; CPS II: Hybrid; Real-Time Systems; Seminar: Program Analysis / SW Testing; Program Verification; Decision Procedures; Seminar: Automata Theory; Quality Assurance; Networks; Tech. Info) and to prerequisite courses (Optimisation, Logic, Graph Theory, Maths I/II, Info I-III).]

Thursday, 2016-07-21, 1200 to 1400: Plenary Tutorial 6 & Questions Session in 101-0-026 (right here)
The following full text is a preprint version which may differ from the publisher's version. For additional information about this publication, click this link: http://hdl.handle.net/2066/195380. Please be advised that this information was generated on 2018-10-16 and may be subject to change.

# Inferring OpenVPN State Machines Using Protocol State Fuzzing

Lesly-Ann Daniel (University of Rennes 1 - ENS Rennes, Email: lesly-ann.daniel@ens-rennes.fr), Joeri de Ruiter (Radboud University, Email: joeri@cs.ru.nl), Erik Poll (Radboud University, Email: erikpoll@cs.ru.nl)

Abstract—The reliability of a security protocol is of the utmost importance but can easily be compromised by a vulnerability in the implementation. A crucial aspect of an implementation is the protocol's state machine. The state machine of an implementation can be inferred by black-box testing using regular inference. These inferred state machines provide a good insight into implementations and can be used to detect any spurious behavior. We apply this technique to different implementations of OpenVPN: the standard OpenVPN and the OpenVPN-NL implementations. Although OpenVPN is a widely used TLS-based VPN solution, there is no official specification of the protocol, which makes it a particularly interesting target to analyze. We infer state machines of the server-side implementation and focus on particular phases of the protocol. Finally, we analyze those state machines, show that they can reveal a lot of information about the implementation which is missing from the documentation, and discuss the possibility of including state machines in a formal specification.

I. INTRODUCTION

Virtual Private Network (VPN) solutions are widely used to establish secure data transmissions over insecure channels (e.g. a public Internet connection). This technology can be used by companies to connect geographically separated offices or to allow remote workers to access the company network.
VPNs use a tunneling mechanism to provide an additional layer to ensure confidentiality, authentication and integrity independent of the underlying protocol. The security of the protocols used to achieve this can easily be compromised by a vulnerability in the implementation. To automatically detect some of these vulnerabilities we can use formal methods. Regular inference, or protocol state fuzzing, is a technique to infer a state machine from the implementation of a protocol [1]. The inferred state machine provides a useful insight into the choices and errors made in the implementation. The implementation should allow all the transitions defined by the grammar of the protocol and react appropriately to unexpected messages, by ignoring them or dropping the connection. The inferred state machine can be analyzed to detect any logical flaws and to check compliance of the implementation with its specification. It can also reveal superfluous states and transitions which should be removed as a precaution. Finally, it gives a good overview of the sequence of messages (which is often not well specified), and can be used to automatically define a formal specification of the protocol [2].

Fig. 1. Example of OpenVPN tunneling IP packets over a UDP channel. The gray message benefits from the security properties (confidentiality, integrity, authentication) provided by the enclosing OpenVPN protocol.

This paper focuses on the OpenVPN protocol [3], which is based on TLS. Even though it is a widely used VPN solution, it has not been subject to a lot of research and no formal specification of the protocol exists. Indeed, there is no documentation about the sequence of messages leading to a successful connection, nor about the correct behavior in response to receiving unexpected messages - even though this is essential for a security protocol.
We use LearnLib [4] to infer state machines of two different OpenVPN servers: the standard OpenVPN implementation based on OpenSSL, and the OpenVPN-NL implementation based on PolarSSL. For each of them we infer several state machines that focus on particular phases of the protocol. We manually analyze those state machines and show that they reveal a lot of information about the implementation that cannot be found in any documentation. Finally, we discuss how state machines can be used to define a formal specification of the protocol.

The OpenVPN protocol is introduced in Section II and protocol state fuzzing in Section III. We present our experimental setup in Section IV and the results of our analysis in Section V. Finally, the related work is discussed in Section VI and we conclude in Section VII.

II. THE OPENVPN PROTOCOL

OpenVPN uses tunneling to provide confidentiality, authentication and integrity for the data transmitted. The entire message to be transmitted (IP packet or Ethernet frame, including its meta-data like sender and recipient) is encapsulated within an OpenVPN message, as illustrated in Figure 1. For an in-depth presentation of OpenVPN, see [3], the doxygen-generated documentation [5], or the security overview on the OpenVPN website [6].

OpenVPN offers a large choice of security options. It is based on the OpenSSL library, which is used for its TLS session negotiation, its encryption and authentication, and its random number generation primitives. Two methods of key exchange are provided: a pre-shared key and a TLS-based mechanism. The rest of the paper will focus on the TLS mode, which is more complex and involves key exchange and re-keying, contrary to the pre-shared key mode, which is more straightforward. In both methods, each peer possesses four independent and unidirectional session keys: HMAC-send, HMAC-receive, encrypt and decrypt, used to encrypt and MAC the data messages.
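As a rough illustration of this four-key structure, one direction of the data channel could be sketched as below. This is only a sketch: the key derivation and the keystream "cipher" are toy placeholders invented for the example, not OpenVPN's actual algorithms; only the HMAC-send / HMAC-receive / encrypt / decrypt key structure follows the text.

```python
import hashlib
import hmac

def derive_session_keys(master: bytes) -> dict:
    # Hypothetical KDF: four independent keys from some master material.
    labels = ("hmac-send", "hmac-receive", "encrypt", "decrypt")
    return {l: hashlib.sha256(master + l.encode()).digest() for l in labels}

def _xor_keystream(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a real cipher: hash-chain keystream XOR-ed with the data.
    stream, block = b"", hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += block
        block = hashlib.sha256(block).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def protect(keys: dict, payload: bytes) -> bytes:
    # Sender side: encrypt the payload, then MAC the ciphertext.
    ct = _xor_keystream(keys["encrypt"], payload)
    tag = hmac.new(keys["hmac-send"], ct, hashlib.sha1).digest()  # 20-byte tag
    return tag + ct

def unprotect(keys: dict, message: bytes) -> bytes:
    # Receiver side: verify with HMAC-receive, then decrypt.
    tag, ct = message[:20], message[20:]
    expected = hmac.new(keys["hmac-receive"], ct, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return _xor_keystream(keys["decrypt"], ct)

# One direction of the tunnel: the receiver's decrypt / HMAC-receive keys
# mirror the sender's encrypt / HMAC-send keys.
sender = derive_session_keys(b"shared master material")
receiver = {"hmac-receive": sender["hmac-send"], "decrypt": sender["encrypt"]}
assert unprotect(receiver, protect(sender, b"IP packet")) == b"IP packet"
```

The point of the unidirectional keys is that each direction of traffic is protected independently; a corrupted tag on either direction is rejected before decryption.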
The TLS mode is based on TLS, which has been subject to a lot of research [7, 8, 9, 10, 11, 12, 13, 14, 15]. A TLS session with bidirectional authentication is negotiated between the client and the server (i.e. both parties must present their own certificate) and is used to securely establish the session keys. There are two methods of session key establishment: in key-method 1, each peer generates their own cipher and HMAC keys and sends them to the other, while in key-method 2, the keys are computed by mixing random material from both parties using the TLS pseudo-random function (PRF). Once both peers have received the session keys, the data tunneling can start: the actual data (IP packet or Ethernet frame) to transmit is encrypted, MAC-ed and encapsulated within a DATA message. Figure 2 shows the normal sequence of messages leading to a successful connection.

OpenVPN multiplexes the control channel and data channel over a single network stream which is not necessarily reliable. OpenVPN actually prefers to use UDP transport instead of TCP, due to the collisions between TCP reliability layers when tunneling TCP over TCP\(^1\). Because TLS is designed to operate over a reliable channel, the control channel is provided with an extra reliability layer, referred to as OpenVPN’s reliability layer, which consists of a simple acknowledgement mechanism active in both UDP and TCP tunneling. Note that the data channel can still benefit from a reliability layer provided by the encapsulated protocol, e.g. tunneling of a TCP session will benefit from the reliable data transfer offered by TCP. The OpenVPN implementation also offers the --tls-auth option to authenticate packets from the control channel by adding an HMAC to the control messages.
This mechanism allows OpenVPN to quickly throw away unauthenticated packets without wasting resources, thus protecting against DoS attacks, and it reduces the attack surface for finding exploitable software bugs for any Man-in-the-Middle attacker.

### III. Protocol State Fuzzing

Protocol state fuzzing is defined in [7] as a technique that uses regular inference (a.k.a. automata learning or state machine inference) to infer a state machine from a protocol implementation. Regular inference uses black-box fuzzing of the order of well-formed messages to automatically infer a state machine which models the implementation of a system, based on its external behavior. In this paper, those state machines are represented as Mealy machines.

#### A. Mealy Machines

A Mealy machine is a finite state machine with output, in which a transition, based on the current state and input, will result in a change of state and produce an output. The Mealy machines we use are deterministic, i.e. for each input and current state, only one transition is possible. Figure 3 shows a graphical representation of a simple Mealy machine. The transition from state s1 to state s2 labeled B/C means that if the state machine is in state s1 and receives an input B, then it will switch to state s2 and produce the output C.

We will use Mealy machines to model the behavior of the OpenVPN server. They describe how the server reacts in response to input messages: which output it produces and how its state is affected. The next part discusses how state machines can be automatically inferred.

\(^1\) [http://sites.inka.de/sites/bigred/devel/tcp-tcp.html](http://sites.inka.de/sites/bigred/devel/tcp-tcp.html)

Fig. 3. A Mealy machine with 3 states

B. Regular Inference

The state machine of the OpenVPN server is inferred using regular inference, a technique based on black-box fuzzing where well-formed packets are sent to the server and the output is used to infer a state machine.
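The Mealy-machine semantics described in Section III-A (a transition labeled B/C consumes input B and emits output C) can be sketched in a few lines. The transition table below is illustrative: it only reuses the B/C example from the text, and the second transition is invented to make the run two symbols long.

```python
class MealyMachine:
    """Minimal deterministic Mealy machine, as in Section III-A."""

    def __init__(self, initial, transitions):
        # transitions maps (state, input) -> (next_state, output)
        self.state = initial
        self.transitions = transitions

    def step(self, symbol):
        # One transition: update the current state, return the produced output.
        self.state, output = self.transitions[(self.state, symbol)]
        return output

    def run(self, word):
        # Run a whole input word, collecting the output word.
        return [self.step(s) for s in word]

# The transition from the text: in state s1, input B leads to s2 with output C.
m = MealyMachine("s1", {
    ("s1", "B"): ("s2", "C"),
    ("s2", "A"): ("s2", "C"),  # hypothetical self-loop for illustration
})
assert m.run(["B", "A"]) == ["C", "C"] and m.state == "s2"
```

A learner such as LearnLib infers exactly this kind of transition table from the outside, by observing the outputs produced for chosen input words.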
The regular inference primitives are provided by the LearnLib library [4]. The system which is analyzed, namely the OpenVPN server, is referred to as the system under learning (SUL) and its state machine is denoted by \( M \). The regular inference involves two actors: a learner (the LearnLib library) and a SUL (the OpenVPN server). The learner has no initial knowledge about \( M \) but is provided with an input alphabet upon which it builds the queries it poses to the SUL.

A fundamental property that must be ensured is the independence of subsequent queries. Therefore, between each query, the SUL must be reset to its initial state. In our case, this is done by killing the OpenVPN server process and starting a new one.

The learner is composed of two parts: the learning algorithm and the equivalence algorithm. The learning algorithm will keep sending membership queries (i.e. what is the response to a sequence of input symbols?) to the SUL until it comes up with a strong hypothesis. Then the hypothesized state machine \( H \) is passed to the equivalence algorithm, which will answer an equivalence query (i.e. is the hypothesized state machine \( H \) equivalent to \( M \)?). As we cannot know for sure whether a hypothesis is equivalent to the implemented state machine, we need to approximate this check. If \( H \) is deemed equivalent to \( M \), then \( H \) is returned as the model of the SUL. Otherwise, the equivalence algorithm returns a counterexample which is used to refine the hypothesis, and the learning algorithm is resumed until it finds a new strong hypothesis.

The learning algorithm used in this paper is Niese’s modified version of Angluin’s \( L^* \) algorithm, which can be used to infer Mealy machines [1, 16]. The equivalence algorithm is a modified version of the W-method [17], refined to cut off entire search branches based on the fact that once a connection is closed by the server it will remain closed [7].
Therefore, we stop building queries over prefixes that end up with a closed connection. Finally, the test harness (detailed in Section IV-A) controls the communication between the learner and the SUL.

IV. SETUP

The servers we test run on a VMware virtual machine hosted on the same computer as the learner and can be started or reset via SSH. The \( L^* \) learning algorithm and the W-method equivalence algorithm are both provided by LearnLib.

A. Test Harness

The main challenge in inferring a correct state machine is to prepare the application-specific learning setup, i.e. the test harness. This includes determining a suitable abstraction of input and output messages, and finding ways to manage concrete runtime data that influences the behavior of the target system [18]. Consequently, the test harness consists of a mapper and a monitor. The test harness implementation is based on the previous work of de Ruiter and Poll [7] on TLS. The source code is available at https://github.com/jderuijter/statelearner/tree/openvpn. Note that the test harness can be reused to analyze several versions of the protocol as long as the language of messages has not been changed.

The learner is provided with an abstract input alphabet upon which it builds queries intended for the OpenVPN server. However, the OpenVPN server expects actual messages and not the abstract symbols from the learner’s input alphabet. Similarly, the learner will expect the responses from the server as abstract symbols from its output alphabet. Therefore, the test harness contains a mapper which translates the abstract symbols to the actual OpenVPN packets, and vice versa. Note that the level of abstraction will affect the final learned model: a compromise must be made between the precision of the model and the learning complexity.

The monitor is in charge of building correct system inputs, based on concrete runtime data that influence the behavior of the system; basically, it consists of a stateless OpenVPN client.
For example, it sends the messages through the network, processes the responses to recover important information (e.g. session ids and keys), handles the acknowledgement process, and implements the security primitives in the way expected by the server (e.g. valid authentication, encryption and signatures). Managing this runtime data requires a deep understanding of OpenVPN to make decisions concerning the semantics of the abstract input symbols, which will affect the final state machine. Since there is no formal specification of the OpenVPN protocol, low-level information was not straightforward to get. We mainly relied on Wireshark traces, the doxygen-generated documentation [5], and the security overview [6]. When more in-depth analysis was needed, we used the OpenVPN source code and the server logs with maximum verbose output.

B. Nondeterminism Issues

That the SUL is deterministic is of paramount importance, since LearnLib can only learn state machines of deterministic systems. Nondeterministic behavior of the SUL can produce a wrong model, or cause unexpected behavior of the learner, including non-termination. Unfortunately, the OpenVPN server has some nondeterministic behaviour that we have to hide from the learner to be able to infer a state machine. The less frequent the nondeterministic behavior is, the harder it is to catch; this is insidious, because a long learning phase can turn out to be unsuccessful because of one wrong query.

When nondeterminism is suspected (e.g. because the learner does not terminate), we manually analyze the query cache to find the query with a nondeterministic answer. The defective query can be analyzed further to track down the cause of nondeterminism through the log file of the server and the Wireshark traces. When the source of nondeterminism is identified, we design a “trick” to work around it. There are two main causes of nondeterminism.
First, the UDP connection between the client and the server is not reliable, so packet loss may be a cause of nondeterminism. We did not expect this kind of behaviour, since our server is simply running on a virtual machine on the same computer as the client. However, with some configurations of the VM (e.g. when using a NAT connection, or during time-synchronization of the VM), we did experience that sometimes the connection dropped, which caused the learning process to fail. The solution is to adapt the configuration of the VM to circumvent these issues (e.g. switch to a host-only connection, disable automatic time synchronization).

Second, there are multiple timeouts and delays on the server side, e.g. the reset time of the server, the time to process the messages, and the TCP and UDP timeouts. The response to a query may vary depending on those timing-related events, which is seen as nondeterminism by the learner. The solutions we adopted to work around this timing-related nondeterminism often implied longer sleeping times or timeouts on the client side. This has a big impact on the learning time and constituted the main bottleneck of the learning process. Choosing the appropriate timeouts and sleeping times is a challenging issue: under-approximating them may cause nondeterminism in the learning process and make it fail, but setting them too long can significantly slow down the learning process. For example, to set the UDP and TCP timeouts, we started with low values and increased them until there were no more packet losses (100 ms for UDP and 800 ms for TCP).

In addition, in order to prevent a wrong counterexample from being added to the hypothesis after an equivalence query, we modified the equivalence algorithm to detect nondeterminism. Each time a counterexample is found by the equivalence algorithm, the query is processed again to check whether the outputs match.
If both outputs are the same, we assume that the output is correct and that a counterexample has been found; otherwise, an exception is raised for nondeterminism. This simple modification could be added to LearnLib as an option to detect nondeterminism. The message replay was good enough in our situation because the probability of nondeterminism is very low (less than 1/100); however, it may not work for higher probabilities of nondeterminism. In general, nondeterminism can either be caused by a nondeterministic target, which therefore cannot be modeled as a Mealy machine and is out of the scope of our approach, or by an unreliable environment (e.g. packet losses), which we can try to resolve by making the environment more reliable (e.g. increasing the timeouts or replaying the packets the appropriate number of times given the probability of packet loss).

C. Input Alphabet for Learning

In order to keep the learning complexity low, we only include messages that would be accepted given the server configuration, and we abstract away the acknowledgement mechanism. So, the SERVERHARDRESET message and the messages for key-method 2, which result in a closed connection, are not included. We also only include TLS messages required to establish a successful OpenVPN session. An OpenVPN session is considered successful when the initialization sequence is complete and the data tunneling can start. To detect a successful data exchange, we use the OpenVPN tunnel to send a ping request to the server. If the exchange is successful, the server will send back a ping response through the tunnel.

Depending on the input alphabet and on the monitoring part, the inferred state machine can change significantly. The learner was run with several input alphabets providing different levels of abstraction, to infer various state machines and highlight different behaviors of the server (which are detailed in Section V).
This also lowers the complexity of the learning process by reducing the size of the input alphabet and the final number of states in the model.

V. RESULTS

We analyzed two different implementations of OpenVPN: OpenVPN 2.3.10 using OpenSSL 1.0.2g, referred to as OpenVPN, and OpenVPN-NL\(^2\) based on OpenVPN 2.3.9 using PolarSSL 1.2.19, referred to as OpenVPN-NL. OpenVPN-NL is a stripped and hardened version of OpenVPN, intended for Dutch government use, which disallows insecure configurations. The server is configured to use key-method 1 and not the tls-auth option. Both UDP and TCP modes were analyzed and turned out to behave differently.

In order to keep the learning complexity low, we chose to split the analysis into several parts. Each part focuses on a particular phase of the protocol. The first part focuses on the OpenVPN session initialization, the second part on the TLS handshake and the last part on the re-keying process. For each state machine, the sequence of messages leading to a successful OpenVPN tunnel, the happy flow, is indicated with bold edges. The state ‘0’ refers to the initial state, the state ‘ISC’ (Initialization Sequence Complete) is the state from which the data tunneling can actually start, and the state ‘X’ refers to a closed connection.

A. The OpenVPN Session Initialization

From the documentation\(^3\) and server logs, we can see that the OpenVPN implementation stores its OpenVPN sessions in three session slots. The first slot contains the active session (i.e. the session whose initialization sequence is complete and which can process DATA messages), the second slot contains the untrusted session being negotiated, and the last slot contains the old session. Note that those session slots are an implementation choice and not a fundamental aspect of the OpenVPN protocol. Each OpenVPN session is initiated with a CLIENTHARDRESET message (CHRv1) that has a unique session-id.
However, the expected impact of the CHRv1 on the server is not specified in the documentation. Therefore, we tried to highlight it by building a state machine over three input symbols: CHRv1 initiates a new session, TLS:FULLSESSION treats the entire TLS-based key exchange as one atomic step, and DataPingReq sends a ping request through the tunnel.

\(^{2}\) https://openvpn.fox-it.com/
\(^{3}\) See https://build.openvpn.net/doxygen/html/group__control__processor.html#details for more details on the tls_session structures.

In UDP mode, the session keys can be renegotiated by sending a new CHRv1 (the dashed loop in Figure 5). This is not possible in TCP mode, since only the first CHRv1 can result in a successful connection, whereas the others eventually result in a closed connection, as can be seen in Figure 4. We found an explanation for this difference: in UDP mode the server cannot know if the connection is closed on the client side, contrary to TCP mode. Therefore, if the client reconnects and tries to initiate a new session by sending a new CHRv1, the server can process the CHRv1 and the new session can be seamlessly renegotiated.

In UDP mode, two sessions can be under negotiation at the same time, but only if there is no active session. This can be seen in Figure 5 from the path $0 \rightarrow 1 \rightarrow 2$, as the first two CHRv1 trigger a response from the server, but after reaching the state ISC, only one CHRv1 triggers a SHRv1 (i.e. path ISC $\rightarrow 3$). This is because when the active session slot is empty, it is used to store the first untrusted session (the others are stored in the second slot). Figure 5 also shows that in UDP mode, a session initiated with a CHRv1 message can succeed without triggering a SHRv1 message.
For instance, following the path $0 \rightarrow 1 \rightarrow 2 \rightarrow 2 \rightarrow ISC$ with the sequence of messages CHRv1/SHRv1 $\rightarrow$ CHRv1/SHRv1 $\rightarrow$ CHRv1/EMPTY $\rightarrow$ TLS:FULLSESSION/SUCCEEDED, the active session-id will be the one introduced by the third CHRv1 message, which triggers no response from the server. This behavior, which is quite confusing but not insecure, stems from the fact that the server only responds with a SHRv1 when filling a new session slot. The first and second CHRv1 fill the first and second slots, but the third CHRv1 just overrides the second session in the second slot. These differences also explain why in UDP mode two paths can lead to a successful session (i.e. $0 \rightarrow 1 \rightarrow 2 \rightarrow ISC$ and $0 \rightarrow 1 \rightarrow ISC$ in Figure 5), while in TCP mode there is only one path ($0 \rightarrow 1 \rightarrow ISC$ in Figure 4).

Finally, from the server logs we observe that in TCP mode, the structure containing the second session is allocated when receiving the CHRv1, but the corresponding SHRv1 is never sent and the session is stuck in the S_PRE_START state.\(^4\) However, the subsequent TLS messages are processed by the server, although the responses to the TLS:CLIENTHELLOALL are not forwarded to the client. If the TLS handshake is continued, the TLS:CERTIFICATEVERIFY triggers an error for a bad signature and the connection is dropped by the server.

The state machines of the OpenVPN server and the OpenVPN-NL server are the same, which makes sense since the OpenVPN-NL implementation is based on the OpenVPN implementation. The TCP and UDP modes differ in the way they handle the sessions, which is not specified in the documentation and is quite surprising, even though it does not seem insecure. Starting with a TLS message in TCP mode results in a closed connection ($0 \rightarrow X$ in Figure 4). Conversely, these messages are ignored in UDP mode ($0 \rightarrow 0$ in Fig. 5), since all UDP messages with an unknown session-id are ignored by the server.

**Fig. 4.** State machine of an OpenVPN or OpenVPN-NL server running in TCP mode.

**Fig. 5.** State machine of an OpenVPN or OpenVPN-NL server running in UDP mode.

\(^4\)See https://build.openvpn.net/doxygen/html/group__control__processor.html for more details on session states.

B. The TLS Handshake

In order to make the state machine simpler, we change CHRv1 to wCHRv1, which focuses on only one session by keeping the previous session-id and TLS session parameters. Resetting the packet-id in wCHRv1 (as done in CHRv1) introduces an issue w.r.t. the acknowledgement mechanism, because CONTROL messages with a known packet-id are considered to be replayed packets by the server. Thus, the responses of the server after a wCHRv1 would depend on the number of CONTROL messages previously sent and the server could no longer be modeled as a finite Mealy machine. For this reason, wCHRv1 does not reset the packet-id, unlike CHRv1.

Figure 6 shows the resulting state machines for OpenVPN and OpenVPN-NL. Most messages resulting in a closed connection have been removed for readability, and the sequence of messages from Tls:ClientCertificate to Tls:Finished has been condensed into the TlsHsk state.

Fig. 6. State machine of an OpenVPN server. The dotted edges correspond to a transition specific to OpenVPN and the underlined messages are specific to OpenVPN-NL.

The differences between the OpenVPN and OpenVPN-NL state machines are only due to the different TLS implementations and cipher suites they use. As expected, the OpenSSL state machine included in the OpenVPN state machine and the PolarSSL state machine included in the OpenVPN-NL state machine are similar to those inferred in [7].
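Comparing two inferred machines amounts to diffing their transition tables. A sketch (with hypothetical one-entry tables; the real machines come from LearnLib and are much larger):

```python
def machine_diff(m1, m2):
    """Return the (state, input) pairs on which two Mealy machines differ,
    together with the diverging (output, next-state) values."""
    keys = set(m1) | set(m2)
    return {k: (m1.get(k), m2.get(k)) for k in keys if m1.get(k) != m2.get(k)}

# Hypothetical fragment of the divergence described in Section V-B: an extra
# TLS handshake message after the handshake yields an alert in OpenVPN-NL
# but a closed connection in OpenVPN.
openvpn    = {("ISC", "Tls:ClientHelloAll"): ("ConnectionClosed", "X")}
openvpn_nl = {("ISC", "Tls:ClientHelloAll"): ("Tls:Alert", "ISC")}
print(machine_diff(openvpn, openvpn_nl))
```

Such a diff is exactly what makes fingerprinting of implementations possible.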
For example, the OpenSSL implementation does not return an error when a CHANGE_CIPHERSPEC is sent before a CLIENTHELLOALL, hence the dead-end state 4, where the TLS session can no longer succeed. This state is specific to OpenVPN, as OpenVPN-NL simply closes the connection in this case. The OpenVPN-NL implementation is more permissive in some other situations: when the TLS handshake is complete (in states ISC and 3) and an extra TLS handshake message is sent, OpenVPN-NL returns an ALERT (see the underlined labels), whereas OpenVPN closes the connection.

OpenVPN uses TLS_RSA_WITH_AES_128_CBC_SHA as its cipher suite, whereas OpenVPN-NL uses the cipher suite TLS_DHE_RSA_WITH_AES_256_CBC_SHA. This difference explains the extra SERVERKEYEXCHANGE in the OpenVPN-NL state machine, which is only sent when using Diffie-Hellman (DH) key exchange.

Both implementations allow the client to send several KEYNEG1 messages over the TLS session, but only the first one is processed; the others are ignored. In our test harness, we made the choice to generate and send fresh session-keys (used to encrypt and MAC the DATA messages) when sending a new KEYNEG1 message. This results in the extra state 3, which highlights the fact that when the server receives a DATA message encrypted and MAC-ed with the wrong keys, it drops the connection, resulting in the DATAPINGREQ/CONNECTIONCLOSED transition from state 3 to X.

Finally, there is a difference between the TCP and UDP modes (Figure 7) because the acknowledgement process is not respected in TCP mode (which is not specified in the documentation). Starting the communication with a CONTROL message different from wCHRv1 results in the dead-end states 2 and 3: in state 2, the server receives a wCHRv1 with a packet-id n > 0 and then waits in state 3 for the messages with a packet-id lower than n. Therefore, the subsequent TLS messages are not processed.

Fig. 7.
Subset of the state machine of an OpenVPN or OpenVPN-NL server, focusing on the particularity of the UDP mode. State 1 and its subsequent states are identical to the TCP version.

C. The Key Renegotiation Mechanism

In OpenVPN, renegotiation of the session keys can be triggered automatically with a SOFTRESET message after a certain number of bytes, packets or seconds, by either the client or the server. To focus on the effect of this SOFTRESET message, we infer a state machine using the following input symbols: wCHRv1, TLS:CLIENTHELLOALL, TLS:FULLHANDSHAKE (which contains the TLS messages from TLS:CLIENTKEYEXCHANGE to TLS:FINISHED), KEYNEG1, DATAPINGREQ and SOFTRESET. The inferred state machines for OpenVPN and OpenVPN-NL are similar, except for the responses containing TLS alerts and the extra SERVERKEYEXCHANGE message mentioned in Section V-B.

As expected, Figure 8 shows that the key renegotiation mechanism can only be triggered after the OpenVPN session is initiated, i.e. in states ISC and 4. The SOFTRESET messages sent before the ISC state end up in a closed connection, which is a safe behavior for a security protocol.

**Fig. 8.** State machine of an OpenVPN-NL server running in TCP mode, highlighting the key renegotiation mechanism. The dashed labels show the successful soft reset messages. Messages resulting in a closed connection have been removed for readability.

After a successful SOFTRESET message, the state machine goes to state 1 and the DATA messages are no longer processed by the server (in states 1, 2 and 3 the DATA messages are ignored). This is the result of a choice we made in the test harness: the key_id that identifies the session-keys of a particular DATA message has been incremented by the SOFTRESET, but the second pair of keys has not been negotiated yet.
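A sketch of this key_id bookkeeping (our simplification for illustration, not OpenVPN code): DATA messages carrying a key_id for which no keys have been negotiated yet are silently ignored.

```python
def handle_data(negotiated_keys, active_key_id, msg_key_id):
    """Sketch: a soft reset increments the expected key_id before the new
    keys exist, so DATA messages are ignored until negotiation completes."""
    if msg_key_id not in negotiated_keys:
        return "ignored"        # no session-keys for this key_id yet
    if msg_key_id != active_key_id:
        return "ignored"        # stale key_id
    return "DataPingRep"        # decrypt, check MAC, answer the ping

# After a soft reset: key_id bumped to 1, but only key_id 0 was negotiated.
print(handle_data(negotiated_keys={0}, active_key_id=1, msg_key_id=1))  # ignored
```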
The server will ignore the subsequent DATA messages with the wrong key_id and, as a result, the state machine is simpler, since a successful SOFTRESET results in a transition to state 1 instead of creating some new state where a DATA message using the old session-keys would trigger a response from the server.

The UDP mode is different from the TCP mode and is a good example of the limitations of LearnLib. When the active session-keys are renegotiated via a SOFTRESET and the initialization sequence fails, the active session state is set to ERROR and a new session is initiated by the server, which waits for 56 seconds and then sends a SERVER HardReset. The test harness cannot differentiate this behavior from the regular 'no reply' case unless it waits for one minute to catch the SERVER HardReset each time there is no reply from the server. If the client does not wait, the SERVER HardReset will eventually be caught as a reply to another message, which introduces nondeterminism in the learning process. In this situation the long timing-related event cannot be suppressed, and trying to infer a state machine would be too time consuming.

**D. Documentation Issues**

During the construction of the test harness we encountered several complications that are worth noting for future work on the OpenVPN protocol, listed here in decreasing order of importance.

First, the sequence of messages leading to a successful tunnel is not explicitly documented, which makes it challenging for a developer to come up with a new OpenVPN implementation. In particular, the behavior in case of erroneous messages is not specified, even though it is essential for a security protocol implementation to handle those error cases properly. This sequence of messages could be added to the documentation as a protocol state machine similar to those presented in this paper. For instance, Figure 8 gives a good overview of the sequence of messages that establishes an OpenVPN session.
In the documentation, the expected behavior when receiving a HardReset or SoftReset message is not made explicit. It is not specified how the different fields of the messages must be handled or how the messages should affect the server and the client. Moreover, in the implementation it is not clear when a CLIENT HardReset is taken into account by the server, since it does not always trigger a SERVER HardReset. Finally, the differences between the UDP and TCP modes are not mentioned in the documentation but are clearly visible in the inferred state machines.

The padding algorithms used for encryption⁵ are not specified in the documentation and it would be helpful to have them documented in the Data Channel Crypto Module⁶. Moreover, in the Data channel key generation section⁷, the process used by OpenVPN to perform key expansion in key-method 2 is only documented by a reference to the source code. It would be helpful to include some more documentation on the key expansion function and the pseudorandom function.

Finally, we reported a mistake in the security overview [6] and the documentation⁸ which has not been corrected yet. The order of the fields of the Key Negotiation message in key-method 1 does not match the implementation: the documentation reports cipher-key length, cipher-key, HMAC-key length and HMAC-key, but the message actually starts with cipher-key length and HMAC-key length.

⁵We used the Java PKCS5Padding for Blowfish/CBC and AES/CBC.
⁶https://build.openvpn.net/doxygen/html/group__data__crypto.html
⁷https://build.openvpn.net/doxygen/html/group__key__generation.html
⁸https://build.openvpn.net/doxygen/html/group__network__protocol.html

VI. RELATED WORK IN PROTOCOL STATE FUZZING

The idea of using regular inference to analyse implementations of security protocols dates back to at least Shu and Lee [19]. An extensive survey of this and other techniques to reverse engineer protocol implementations has been given by Narayan et al. [20].
Regular inference with LearnLib has been applied to analyze implementations of EMV payment cards [21], biometric passports [22], TLS [7, 23], and SSH [24]. Nearly always, different implementations of the same protocol turn out to have different state machines, so regular inference can be used to fingerprint a particular implementation. In most cases the impact of fingerprinting is limited, but it can leak confidential information; for example, a comparison of e-passport implementations from ten different countries showed that the nationality can be determined from each implementation's fingerprint [25]. For several TLS implementations regular inference revealed new security vulnerabilities [7]; the FREAK attack on TLS [26] already showed security flaws caused by flawed implementations of the TLS state machine, which might have been found using regular inference.

Regular inference has been extended using predicate abstraction [27] to consider the influence of data on the control flow. In [18], Merten et al. proposed a systematic method to implement a test harness for LearnLib, including a mapper and a data monitoring part. There has also been research into inference of timed automata [28, 29]. Such techniques might be used to analyze protocol implementations including their timing behaviour and possibly avoid the problem of timing-related nondeterminism that we ran into. Concerning OpenVPN, Vranken [30] developed fuzzers based on libFuzzer to analyze the OpenVPN implementation and found four important security vulnerabilities.

VII. CONCLUSION

We presented an automated analysis of two OpenVPN implementations using a technique called protocol state fuzzing, which uses regular inference to infer state machines of the OpenVPN server. This approach is able to find logical flaws in the state machine of implementations, but cannot detect, for instance, flaws caused by malformed messages, such as the recent OpenVPN flaws found using fuzzing [30].
We analyzed the inferred state machines manually, as they are relatively small; for bigger state machines, one could consider using a model checker to formally verify properties, as done in [24].

Our analysis abstracts from some of the finer details of the implementations. First, the state machine depends on the test harness, which defines the input alphabet and the semantics of the messages. Our test harness intentionally conceals the acknowledgement mechanism and the smooth transition of the key renegotiation mechanism. These concessions to the precision of the model are necessary to keep the learning complexity low and reduce the learning time. Second, the timing-related events, which play an important role in the protocol, cannot be modeled in a simple Mealy machine and therefore must be abstracted. Modeling timing-related events would require a more complex model of timed automata with output, which currently cannot be inferred from real systems. In addition, those timing-related events cause nondeterminism in the learning process, which can only be handled by introducing timeouts and delays in the test harness. They are the main bottleneck of the learning process and explain the learning times, which range from about 40 minutes to 49 hours.

Building a test harness essentially involves re-implementing an OpenVPN client able to send correct messages to the server in any order. This is a difficult and time-consuming task for a specific protocol, so it is more worthwhile if the test harness can be reused to analyze many implementations, as has been done for TLS [7, 23] or SSH [24]. In the case of OpenVPN there are not as many implementations, but the test harness can be reused to analyze different versions.

The inferred state machines provide a useful insight into the decisions - and errors - made in the implementation. They can be used to easily spot superfluous states and transitions, which then warrant closer analysis as they may introduce security flaws.
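The timeouts and delays mentioned above boil down to a retry loop around each query: a sketch of one way to filter out timing-related nondeterminism (hypothetical harness API; `send_query` stands for one round-trip to the server).

```python
import time

def reliable_query(send_query, retries=3, delay=0.5):
    """Sketch: repeat a query until two consecutive runs agree, so late,
    timing-related server messages do not pollute a single observation."""
    last = None
    for _ in range(retries):
        out = send_query()
        if out == last:
            return out
        last = out
        time.sleep(delay)      # let pending server messages drain
    raise RuntimeError("nondeterministic response")

# Deterministic stub: the same answer twice in a row is accepted.
print(reliable_query(lambda: ["SHRv1"], delay=0.0))  # ['SHRv1']
```

The price is clear: every query is executed at least twice, which is one reason learning times grow so quickly.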
In a security evaluation, they can be used to harden the implementation by simplifying the state machine, reducing the risk of vulnerabilities. They can also be used to automatically infer a specification from an implementation, which could be automatically updated throughout the software's evolution.

The inferred state machines for the implementations of OpenVPN servers did not reveal any vulnerabilities, and they comply with what would be expected from a security protocol. Security-critical errors, such as failures in the TLS handshake and failed integrity checks or decryption of DATA messages, always result in a closed connection. The servers also ignore incorrect messages, such as messages with an unknown session-id, KEYNEG messages sent after the session initialization, or DATA messages with a wrong key-id. So our results increase the confidence in the tested OpenVPN implementations.

It is unfortunate that the message sequence leading to a successful OpenVPN connection and the correct behavior when receiving unexpected messages are not specified more clearly in the OpenVPN documentation. This information could easily be specified by one (or several) protocol state machines such as the ones we inferred. The documentation would really benefit from the addition of such state machines, e.g. the one given in Figure 8, which gives a good overview of the sequence of messages used to establish an OpenVPN session. A prose specification alongside such a state machine could then describe the main timing-related events and give more details on how to handle error cases such as unexpected or incorrect input messages.

REFERENCES
A Uniform Random Test Data Generator for Path Testing

Arnaud Gotlieb and Matthieu Petit
INRIA Rennes - Bretagne Atlantique Research Centre
35042 Rennes Cedex, France
{Arnaud.Gotlieb, Matthieu.Petit}@irisa.fr
Tel: +33 (0) 2 99 84 75 76 – Fax: +33 (0) 2 99 84 71 71

HAL Id: inria-00540283, https://inria.hal.science/inria-00540283, submitted on 26 Nov 2010.

Abstract

Path-oriented Random Testing (PRT) aims at generating a uniformly spread out sequence of random test data that execute a single control flow path within a program. The main challenge of PRT lies in its ability to build such a test suite efficiently, in order to minimize the number of rejects (test data that execute another control flow path). We address this problem with an original divide-and-conquer approach based on constraint reasoning over finite domains, a well-recognized Constraint Programming technique. Our approach first derives path conditions by using backward symbolic execution and computes a tight over-approximation of their associated subdomain by using constraint propagation and constraint refutation. Second, a uniform random test data generator is extracted from this approximated subdomain. We implemented this approach and obtained experimental results that show the practical benefits of PRT based on constraint reasoning.
On average, we obtained a CPU time improvement of two orders of magnitude over standard Random Testing on a set of paths extracted from classical benchmark programs.

Key words: Random testing, Path Testing, Constraint reasoning

¹This paper is an extended version of Ref. [15].

1 Introduction

Path testing is one of the most popular white-box testing techniques. It was introduced more than thirty years ago by Howden [18] and has continuously been developed since then. It consists in selecting some paths within a program, finding input test data that activate these paths, and checking the results of the executed paths against an oracle. Associated to each selected path, there is a subdomain of the input domain that is considered as covered when one of its points is selected and submitted to the program. The property saying that each point of the subdomain is interchangeable has been called "reliability" by Goodenough and Gerhart [13], while Hamlet and Taylor called it "homogeneity" in the context of Partition testing [16].

The main principle that underlies path testing says that testing each selected path with a single point from a homogeneous subdomain suffices to get confidence in the correctness of the program along that path. Though this is a reasonable assumption, Hamlet and Taylor also explained that when this strategy fails, "it is technically because a subdomain lacked homogeneity" and suggested that a uniform distribution across each subdomain would be more appropriate, as we cannot evaluate homogeneity a priori. In fact, test data that cause the same path to be executed do not have the same failure-revealing capability, and sampling over the associated subdomain would increase the probability of selecting a failure-causing input.

In [14], we introduced "Path-oriented Random Testing (PRT)" as a new technique to perform Random Testing at the path level. The idea behind PRT is to apply the principle of uniform selection to the selection of test data that all activate the same path.
The advantages of such an approach are the following: it increases the chance of generating a failure-causing input for a given path by giving each input from the path subdomain the same probability of being selected; it introduces randomness, and therefore objectivity, into the test data generation process of path testing; and it allows the random testing process to focus on specific paths of the program that are more likely to contain faults.

However, there is also a main drawback behind PRT: it requires building a uniform random test data generator for a given path, which is a hard problem. As the tester usually does not know the exact subdomain associated to a given path, they cannot easily define a random generator for this subdomain; one must resort to generating test data from the entire input domain. Test data that execute the selected path are then kept, while test data that execute another path are simply rejected. Thus, the challenging problem in PRT consists in building efficiently a "uniform random test data generator" (URTG) by minimizing the number of rejects within the generated random sequence.

In this paper, we address this problem by using constraint reasoning over finite domains [17]. We propose an original divide-and-conquer approach that exploits constraint propagation and constraint refutation over finite domains to build an over-approximation of the input subdomain corresponding to a given path. By reasoning on the constraints of the path conditions (i.e. symbolic constraints on input variables that correspond to a given path), we remove parts of the input domain that are inconsistent with these constraints. The over-approximation should be as tight as possible in order to minimize the rejects during test data generation. The shape of the over-approximation should also make it easy to build a URTG.
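The naive rejection-based generator can be made concrete with a short sketch of our own (using, as path conditions, the two constraints of the motivating example of Section 2): test data are drawn uniformly from the whole input domain and rejected whenever they would execute another path.

```python
import random

def on_path(p):
    """Path conditions of foo's path 1-2-3-4-5 (see the motivating example)."""
    x, y = p
    return y > x + 50 and x * y < 60

def rejection_urtg(domain, path_predicate, n):
    """Uniform generator for a path subdomain by rejection: draw uniformly
    from the whole input domain, keep only inputs satisfying the path."""
    suite, rejects = [], 0
    while len(suite) < n:
        p = random.choice(domain)
        if path_predicate(p):
            suite.append(p)
        else:
            rejects += 1
    return suite, rejects

random.seed(0)
domain = [(x, y) for x in range(101) for y in range(101)]
suite, rejects = rejection_urtg(domain, on_path, 5)
print(len(suite), rejects)
```

The high reject count observed here is exactly the cost that constraint propagation and refutation are meant to cut down.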
Though our divide-and-conquer algorithm is based on complex constraint manipulation, we show that the overhead introduced by constraint propagation and refutation is justified by the gain it offers. In addition, our approach is able to detect some non-feasible paths that cannot be identified with other Random Testing approaches such as adaptive RT [5] or feedback-directed RT [21]. PRT based on constraint reasoning was implemented using the clp(fd) finite domains constraint solver of SICStus Prolog and was evaluated on several C programs. These experiments show that PRT based on constraint reasoning outperforms Random Testing for the uniform activation of a single path. In particular, we got a CPU time improvement of two orders of magnitude in favor of PRT on the longest path (including 18 function calls) of a C implementation of the Traffic Collision Avoidance System.

ush foo(ush x, ush y) {
1.   if (x <= 100 && y <= 100) {
2.     if (y > x + 50)
3.       ...
4.     if (x * y < 60)
5.       ...
     }
}

Figure 1. Program foo

Outline of the paper. In Section 2 we give an overview of PRT based on constraint reasoning on a simple but illuminating example. In Section 3 we present some background on symbolic execution and random testing, while explanations on how to tune usual Constraint Programming techniques to improve PRT are given in Section 4. We present the divide-and-conquer algorithm to perform PRT in Section 5, and Section 6 contains the experimental results obtained with our implementation. In Section 6, we also discuss related work. Finally, we conclude and draw some perspectives in Section 7.

2 Motivating example

Consider the C program of Fig. 1 and the problem of building a URTG for path 1→2→3→4→5. By looking at the decisions of the program, we can see that x and y must range in 0..100. But the other decisions cannot be tackled so easily.
By using a URTG that independently picks pairs \((x_i, y_i)\) in 0..100 × 0..100 and rejects the pairs \((x_j, y_j)\) that do not satisfy the constraints \(y_j > x_j + 50 \land x_j \cdot y_j < 60\) (the rejection method [9]), we get a URTG that solves the problem. However, this approach is highly expensive, as it will reject a lot of randomly generated pairs. In fact, by manually analyzing the program, we can see that the average probability of rejecting a pair is not far from \(\frac{99}{100}\) with this approach. Indeed, activating the path 1→2→3→4→5 has a very low probability, as only 58 input points out of 10201 satisfy the constraints.

In contrast, by using constraint propagation and constraint refutation, we can minimize this probability and reduce the length of the generated test suite. By using constraint propagation over finite domains, we get immediately that any solution pair \((x, y)\) must range over the rectangle \(D_1 = (x \in 0..1, y \in 51..100)\),\(^2\) which is a correct\(^3\) and tight over-approximation of the solutions of the problem. Building a random test data generator for \(D_1\) is easy, as we can still select \(x\) and \(y\) independently. This would not have been true if \(D_1\) had the shape of a triangle, for example. Technically, one says that \(D_1\) is a hypercuboid.

\(^2\)We suppose that ush stands for unsigned short integers.

In addition, by combining domain bisection and constraint refutation, we can get an even tighter over-approximation. \(D_1\) can be fairly divided into 4 subdomains: \((x = 0, y \in 51..75)\), \((x = 1, y \in 51..75)\), \((x = 0, y \in 76..100)\), \((x = 1, y \in 76..100)\). This division is fair as each subdomain has exactly the same number of two-dimensional points. Thanks to constraint refutation, the fourth subdomain can be safely removed from the domain for which we want to build a URTG. Indeed, constraint propagation easily shows that there is no solution of the path conditions in this subdomain.
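These figures are easy to check by brute force (a verification sketch for the reader, not part of the PRT tooling):

```python
# Enumerate the subdomain of path 1-2-3-4-5 of foo over 0..100 x 0..100.
sols = [(x, y) for x in range(101) for y in range(101)
        if y > x + 50 and x * y < 60]
print(len(sols))                              # 58 solution points

# Constraint propagation tightens the domain to D1 = (x in 0..1, y in 51..100):
assert all(0 <= x <= 1 and 51 <= y <= 100 for x, y in sols)

# Reject probability over the full domain, indeed close to 99/100:
print(round(1 - len(sols) / (101 * 101), 4))  # 0.9943
```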
As \(D_2 = (x = 0, y \in 51..75) \cup (x = 0, y \in 76..100) \cup (x = 1, y \in 51..75)\) is the union of subdomains of the same area, we can still easily build a URTG for \(D_2\) by selecting \(y\) independently from \(x\). In fact, we designed our method by keeping this latter constraint in mind. Finally, by using this method, the average probability of rejecting a possible pair in \(D_2\) is just around \(\frac{22}{100}\) (58 input points out of the 75 of \(D_2\) satisfy both decisions).

3 Background

In this section, we recall how to derive the path conditions associated to a control flow path by using symbolic execution (Sec. 3.1) and the basic principles of Random Testing (Sec. 3.2).

3.1 Symbolic execution

3.1.1 Control Flow Graph

The Control Flow Graph (CFG) of a program is a connected oriented graph composed of a set of vertices, a set of edges and two distinguished nodes: \(e\), the unique entry node, and \(s\), the unique exit node. Each node represents a basic block and each edge represents a possible branching between two basic blocks. Programs with multiple exits can easily be tackled by adding an additional exit node. A path is a finite sequence of edge-connected nodes of the CFG which starts on \(e\). As an example, the CFG of the C program power is given in Fig. 2. This program computes \(x^y\). Note that this program contains a non-feasible path: \(1 \rightarrow 2 \rightarrow 4 \rightarrow 5 \rightarrow 6\).

\(^3\)No solution is lost.

3.1.2 Symbolic states

Symbolic execution works by computing symbolic states for a selected path.
A symbolic state for path \( e \rightarrow n_1 \rightarrow \ldots \rightarrow n_k \) in program \( P \) is a triple
\[
(e \rightarrow n_1 \rightarrow \ldots \rightarrow n_k, \{(v, \phi_v)\}_{v \in \text{Var}(P)}, PC)
\]
where \( \phi_v \) is a symbolic expression associated to the variable \( v \) and \( PC = c_1 \land \ldots \land c_n \) is a conjunction of constraints associated to path \( e \rightarrow n_1 \rightarrow \ldots \rightarrow n_k \), called the path conditions. \( \text{Var}(P) \) denotes the set of variables of \( P \). A symbolic expression is either a symbolic value (possibly \( \text{undef} \)) or a well-parenthesized expression composed over symbolic values. In fact, when computing new symbolic expressions, each internal variable reference is replaced by its previously computed symbolic expression. In the program of Fig.2, the symbolic state of path \( 1 \rightarrow 2 \rightarrow 4 \rightarrow 5 \rightarrow 6 \) can easily be obtained by inductively computing the following sequence of symbolic states:
\begin{align*}
&(1, \{(x, X), (y, Y), (w, \text{undef}), (z, \text{undef})\}, \text{true}) \\
&(1 \rightarrow 2, \{(x, X), (y, Y), (w, \text{abs}(Y)), (z, 1.0)\}, \text{true}) \\
&(1 \rightarrow 2 \rightarrow 4, \{(x, X), (y, Y), (w, \text{abs}(Y)), (z, 1.0)\}, \text{abs}(Y) = 0) \\
&(1 \rightarrow 2 \rightarrow 4 \rightarrow 5 \rightarrow 6, \{(x, X), (y, Y), (w, \text{abs}(Y)), (z, 1.0)\}, Y < 0 \land \text{abs}(Y) = 0)
\end{align*}
where \( X \) (resp. \( Y \)) is the symbolic value of the input variable \( x \) (resp. \( y \)). Note that symbolic expressions and path conditions hold only over symbolic input values (except in the presence of floating-point computations [3]). Solving the path conditions either shows that the corresponding path is non-feasible or yields a test datum on which the path is executed.
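The substitution step at the heart of this computation can be illustrated with a deliberately naive, string-based sketch (the helpers `substitute`, `assign` and `branch` are our own illustrative names, not the paper's implementation), replayed on the first statements of the `power` program:

```python
import re

def substitute(expr, state):
    """Replace each variable occurrence in expr by its current symbolic
    expression (identifiers not in the state are left unchanged)."""
    return re.sub(r'\b[a-z]\w*\b',
                  lambda m: state.get(m.group(0), m.group(0)),
                  expr)

def assign(state, var, expr):
    """Forward step for an assignment var := expr."""
    new_state = dict(state)
    new_state[var] = substitute(expr, state)
    return new_state

def branch(pc, cond, state):
    """Forward step for a branch: add the substituted condition to PC."""
    return pc + [substitute(cond, state)]

# Prefix 1 -> 2 -> 4 of the path above: w := abs(y); z := 1.0; then the
# loop-exit condition w == 0 at node 2 (symbolic inputs are X and Y).
state = {'x': 'X', 'y': 'Y'}
state = assign(state, 'w', 'abs(y)')
state = assign(state, 'z', '1.0')
pc = branch([], 'w == 0', state)
```

The uppercase symbolic values `X`, `Y` are deliberately outside the lowercase identifier pattern, so already-substituted expressions are never rewritten again.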
In the above example, the path conditions \( Y < 0 \land \text{abs}(Y) = 0 \) have no solution, meaning that the path \( 1 \rightarrow 2 \rightarrow 4 \rightarrow 5 \rightarrow 6 \) is non-feasible.

#### 3.1.3 Forward/backward analysis

Symbolic states are computed by induction on their path by a forward or a backward analysis [19]. Each statement of each node of the path is symbolically evaluated using an evaluation function which computes the symbolic states. Forward analysis follows the statements of the selected path in the same direction as that of actual program execution, whereas backward analysis uses the reverse direction. Backward analysis is usually preferred when one only wants to compute the path conditions, as it saves memory space. Indeed, backward analysis does not require the symbolic expressions to be stored when computing the path conditions. The idea is just to replace local references by symbolic expressions within the path conditions. We illustrate this point on the backward symbolic execution of path 1→2→3→2→4→6:
\begin{align*}
&(4 \rightarrow 6, \{(x, X), (y, Y)\}, Y \geq 0) \\
&(2 \rightarrow 4 \rightarrow 6, \{(x, X), (y, Y)\}, w = 0 \land Y \geq 0) \\
&(2 \rightarrow 3 \rightarrow 2 \rightarrow 4 \rightarrow 6, \{(x, X), (y, Y)\}, w \neq 0 \land w - 1 = 0 \land Y \geq 0) \\
&(1 \rightarrow 2 \rightarrow 3 \rightarrow 2 \rightarrow 4 \rightarrow 6, \{(x, X), (y, Y)\}, \text{abs}(Y) \neq 0 \land \text{abs}(Y) - 1 = 0 \land Y \geq 0)
\end{align*}

### 3.2 Random Testing

Random Testing (RT) is the process of selecting test data at random according to a uniform probability distribution over the program's input domain. Although RT has traditionally been regarded as a blind approach to program testing [20], the results of actual random testing experiments confirmed its effectiveness in revealing faults [8,16]. We believe that a key advantage of RT over other techniques is that it selects test data objectively, ignoring the specification or the structure of the program under test.
When the input domain of a program is the Cartesian product of some finite numeric domains, building a Uniform Random Test Data Generator (URTG) is trivial, but when the input domain is formed of data structures or infinite domains, the task is more complex [2]. For the sake of simplicity, we shall confine ourselves to a simple input domain made of the Cartesian product of bounded intervals of integers. Extensions will be considered in the conclusion of the paper. In this section, we recall the principle of URTG (Sec.3.2.1) and explain why performing Random Testing over a hypercuboid is a simple task (Sec.3.2.2). We end this section by giving two invariance properties of RT on which our approach is based (Sec.3.2.3).

#### 3.2.1 Uniform Random Test data Generation (URTG)

An RTG is uniform when each point of the input domain of a program has the same probability of being chosen. However, it is well known that uniformity can only be approximated on deterministic machines [9]. Most of the time, pseudo-random number generators make use of linear congruential rules such as \( x_n = (a_1x_{n-1} + a_2x_{n-2} + \ldots) \mod m \) to generate numbers. Thus, the \( n^{th} \) generated number is not independent of the previously generated ones. Nevertheless, these pseudo-random number generators behave well in practice (not far from uniformity) and so they suffice for our purpose. The design of such generators is outside the scope of this paper; a complete and recent survey of this topic can be found in [9].

#### 3.2.2 Random Testing over a hypercuboid

The input domain of the program under test is formed by the Cartesian product of bounded intervals of integers. Technically, such an input domain is called a hypercuboid, which is the \( n \)-dimensional extension of the 3-dimensional cuboid. Performing random testing based on a uniform distribution over a hypercuboid domain is simple, as any of its points can be randomly chosen by selecting its coordinates independently.
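Independent coordinate selection over a hypercuboid fits in a few lines; the sketch below (a hypothetical helper, not tied to any particular implementation) takes the hypercuboid as a list of inclusive integer ranges:

```python
import random

def hypercuboid_urtg(ranges, n, seed=7):
    """Uniform random tuples over a hypercuboid given as a list of
    inclusive integer ranges [(lo_1, hi_1), ..., (lo_n, hi_n)]:
    each coordinate is drawn independently of the others."""
    rng = random.Random(seed)
    return [tuple(rng.randint(lo, hi) for (lo, hi) in ranges)
            for _ in range(n)]

# 1000 uniform points of the square 0..100 x 0..100
points = hypercuboid_urtg([(0, 100), (0, 100)], 1000)
```

Uniformity over the product domain follows directly from the independence of the coordinate draws; this is precisely what fails when the domain is not a hypercuboid (e.g. a triangle).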
Let us assume a two-dimensional input space \((x, y)\); then RT can be implemented by selecting \(x\) at random and then \(y\) at random, without paying attention to the value obtained for \(x\).

#### 3.2.3 Two invariance properties of RT

Our PRT approach makes use of two fundamental invariance properties of uniform generators. The first property states that a uniform random generator for a given domain \(D\) can also serve as a uniform generator for any of the subdomains of \(D\). More formally:

**Property 1 (First invariance property)** Let \(S\) be a sequence of uniformly distributed tuples of values for a domain \(D\); then for any subset \(D'\) of \(D\), it is always possible to extract from \(S\) a sequence \(S'\) of uniformly distributed tuples for \(D'\).

**Proof:** Let \(S = \{x_1, \ldots, x_N\}\) be a set of \(N\) points uniformly distributed over \(D\). Then, the probability of drawing \(x_i\) from \(D\) is the same for each \(i\), and if \(S' = \{x_{i_1}, \ldots, x_{i_M}\}\) is the set of \(M\) points that belong to both \(S\) and \(D'\), then \(S'\) is also uniformly distributed over \(D'\). Extracting such a sequence from \(S\) can be done simply by rejecting the tuples that do not belong to \(D'\). The remaining sequence \(S'\) is still uniformly spread out over \(D'\), as \(D'\) is a subset of \(D\). Of course, the smaller \(D'\) is w.r.t. \(D\), the larger the uniform sequence for \(D\) must be.

The second property states that a URTG can be built in a hierarchical manner:

**Property 2 (Second invariance property)** Let \(D\) be a domain of \(N\) tuples, let \(K\) be a divisor of \(N\) and let \(D_1, \ldots, D_K\) be a partition of \(D\) such that each \(D_i\) possesses the same number of tuples; then a uniform random sequence for \(D\) can be built by first picking a subdomain \(D_i\) uniformly at random among \(D_1, \ldots, D_K\), and then picking a single tuple uniformly at random in the selected \(D_i\).

**Proof:** We show that each tuple of \(D\) has the probability \(1/N\) of being drawn.
The probability of drawing \(D_i\) from \(D_1, \ldots, D_K\) is \(1/K\) and, as \(D_1, \ldots, D_K\) is an equi-partition of \(D\), each \(D_i\) possesses \(N/K\) tuples. Hence, each tuple resulting from the proposed process has the probability \(1/K \times 1/(N/K) = 1/N\) of being drawn.

The important point here is that all the domains \(D_i\) have the same number of tuples. Whenever \(K\) is given and \(N\) cannot be divided by \(K\), it is possible to consider instead the smallest integer greater than \(N\) that can be divided by \(K\). This remark is necessary in our context, as explained below.

## 4 Constraint Reasoning in PRT

Path-oriented Random Testing aims at finding a test suite that uniformly exercises a selected control flow path. We propose using constraint reasoning to build such a test suite efficiently. Constraint reasoning usually involves two interleaved processes in order to get a solution of a constraint system: constraint propagation and variable labeling. Constraint propagation prunes the variation domain of variables by eliminating inconsistent values, while labeling tries to infer solutions by making hypotheses and refuting subdomains. The key point of our approach is to employ constraint propagation (Sec.4.1) to find a hypercuboid that over-approximates the solution set of the path conditions, and to exploit constraint refutation (Sec.4.2) to remove spurious subdomains. We now turn to the description of these processes.

### 4.1 Constraint propagation

**The process.** Constraint propagation introduces constraints from the path conditions into a propagation queue. Then, an iterative algorithm processes the constraints of this queue one by one, filtering the domains of the variables of their inconsistent values. When the variation domain of the variables is large, filtering algorithms usually consider only the bounds of the domains for efficiency reasons: a domain \(D = \{v_1, v_2, \ldots, v_{n-1}, v_n\}\) is approximated by the range \(v_1..v_n\).
When the domain of a variable is pruned, the algorithm reintroduces into the queue all the constraints where this variable appears, in order to propagate this information. The algorithm iterates until the queue becomes empty, which corresponds to a state where no more pruning can be performed. When selected in the propagation queue, each constraint is added to a constraint store which memorizes all the considered constraints. The constraint store is contradictory if the domain of at least one variable becomes empty. In this case, the corresponding path is shown to be non-feasible.

**Efficiency and completeness.** When considering only the bounds of domains, constraint propagation is very efficient, as it runs in \(O(m)\), where \(m\) denotes the number of constraints [17]. But it is worth noticing that constraint propagation alone does not guarantee satisfiability. In fact, constraint propagation just tries to prune the variation domain; it does not test for satisfiability. For example, consider the following constraint system over finite domains: \(x \in 1..100, y \in 1..100, z \in 1..100, x = y \times z, x < z \times y\). Here, constraint propagation does not perform any pruning on the domains, although the constraint system is clearly unsatisfiable. Fortunately, these situations are infrequent in practice and inconsistent subdomains can often be discarded. Note that computing the exact solution set of integer constraints over bounded domains is NP-hard [17].

**Hypercuboids.** Constraint propagation over finite domain variables computes hypercuboids: each variable of an \(n\)-dimensional space belongs to a range \(\text{Min..Max}\) of values. Sometimes values can be removed from ranges, such as in the presence of disequality constraints (e.g. \(X \neq a\)), but we will ignore such removals, as our ultimate goal is to build a URTG and not to solve the constraints.
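For the motivating example, this bound filtering can be mimicked with a small fixpoint loop. The narrowing rules below are hand-derived for the two constraints \(y > x + 50\) and \(x \times y < 60\); this is an illustrative sketch of bound consistency, not the clp(fd) algorithm.

```python
def propagate(x_lo, x_hi, y_lo, y_hi):
    """Bound-consistency fixpoint for y > x + 50 and x * y < 60
    over integer intervals, with hand-derived narrowing rules."""
    changed = True
    while changed:
        changed = False
        new = (
            x_lo,
            min(x_hi, y_hi - 51,                      # y > x + 50  =>  x <= y_hi - 51
                (59 // y_lo) if y_lo > 0 else x_hi),  # x * y <= 59 =>  x <= 59 // y_lo
            max(y_lo, x_lo + 51),                     # y > x + 50  =>  y >= x_lo + 51
            min(y_hi, (59 // x_lo) if x_lo > 0 else y_hi),
        )
        if new != (x_lo, x_hi, y_lo, y_hi):
            x_lo, x_hi, y_lo, y_hi = new
            changed = True
        if x_lo > x_hi or y_lo > y_hi:
            return None  # empty domain: path conditions unsatisfiable
    return (x_lo, x_hi), (y_lo, y_hi)

result = propagate(0, 100, 0, 100)
```

Starting from \(x, y \in 0..100\), the loop reaches the fixpoint \(x \in 0..1, y \in 51..100\), i.e. the hypercuboid \(D_1\) of the motivating example.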
In the example of Fig.1, constraint propagation yields the hypercuboid \(D_1 = (x \in 0..1, y \in 51..100)\), where \(D_1\) is an over-approximation of the solution set of the path conditions \(x \in 0..100, y \in 0..100, y > x + 50 \land x \times y < 60\).

### 4.2 Constraint refutation

Constraint refutation is the process of temporarily adding a constraint to a set of constraints and testing, by using constraint propagation, whether the resulting constraint system has no solution. If the resulting constraint system is unsatisfiable, the added constraint is shown to be contradictory with the rest of the constraints and it is therefore refuted. When constraint propagation does not yield a contradiction, nothing can be deduced, as constraint propagation is not complete in general. Based on constraint addition/removal and propagation, this process is very efficient and it can be exploited in PRT to test domain intersection: let \(D\) be a subdomain defined by a set of constraints and \(C\) be a constraint; checking whether \(D \cap C = \emptyset\) holds can be done by adding the constraint \(C\) to \(D\) and testing whether \(C\) is refuted or not. An example of such a refutation was given in the motivating example of the paper.

## 5 PRT based on constraint reasoning

In this section, we detail our divide-and-conquer algorithm to perform PRT based on constraint reasoning. Firstly, we detail how to fairly divide the hypercuboid resulting from constraint propagation (Sec. 5.1) and secondly, we explain how to exploit constraint refutation to prune the subdomain associated to the path conditions (Sec.5.2). Finally, we show how our algorithm can exploit these processes to build an efficient URTG for PRT (Sec.5.3).

### 5.1 Dividing the hypercuboid

Applying constraint propagation on the path conditions results in a hypercuboid that is a correct approximation of the solution set of the path conditions. Using this approximation to define a URTG for PRT is possible but not optimal.
We propose a new way of refining this hypercuboid into smaller subdomains. It is worth noticing that special attention must be paid to the way this hypercuboid is broken into subdomains, in order to preserve the uniformity of the generator. Let \(k\) be a given parameter, called the division parameter; our method is based on the division of each variable domain into \(k\) subdomains of equal size. When the size of a variable domain cannot be divided by \(k\), we enlarge the domain until its size can be divided by \(k\). By iterating this process over all the \(n\) input variables, we get a fair partition of the (augmented) hypercuboid into \(k^n\) subdomains.

Consider the constraint set \(\{y \geq 0, x \leq 14, x > y\}\) that corresponds to the triangle domain shown on the left in Fig.3. We will use this example in the rest of the paper to present our approach. Constraint propagation over these constraints gives
\[
D = (x \in 0..14, y \in 0..14).
\]
Consider a division parameter equal to 4. Then we have to divide the rectangle domain \(x \in 0..14, y \in 0..14\) into \(4^2 = 16\) subdomains of equal area. But 4 does not divide 15\(^4\), therefore we enlarge the domain of \(x\) and the domain of \(y\) with a single value each. As a result, we get the 16 following subdomains: \(D_1 = (x \in 0..3, y \in 12..15)\), \(D_2 = (x \in 0..3, y \in 8..11)\), ..., \(D_{16} = (x \in 12..15, y \in 0..3)\), which form a partition of the (augmented) hypercuboid \(D' = (x \in 0..15, y \in 0..15)\), as shown on the right in Fig.3.

\(^4\) There are 15 values in each variable domain.

### 5.2 Pruning the hypercuboid

As said previously, constraint refutation can be used to test domain intersection efficiently. Thus, we eliminate parts of the hypercuboid that are inconsistent with the path conditions. For the triangle domain, we can safely eliminate \( D_1, D_2, D_3, D_5, D_6 \) and \( D_9 \) by using constraint refutation.
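The division and refutation on the triangle example can be reproduced with a short sketch (our own illustrative code; the interval test below is a special-purpose refutation check for \(x > y\), not a general constraint solver). With the subdomain numbering used above (\(x\) increasing, \(y\) decreasing), it recovers the eliminated subdomains \(D_1, D_2, D_3, D_5, D_6, D_9\):

```python
def blocks(lo, hi, k):
    """Divide lo..hi into k equal ranges, enlarging hi until k divides
    the number of values (the fair division of Sec. 5.1)."""
    while (hi - lo + 1) % k != 0:
        hi += 1
    w = (hi - lo + 1) // k
    return [(lo + i * w, lo + (i + 1) * w - 1) for i in range(k)]

def divide_and_refute(k=4):
    xs = blocks(0, 14, k)                  # x blocks: (0,3), (4,7), (8,11), (12,15)
    ys = list(reversed(blocks(0, 14, k)))  # y blocks, decreasing, to match D1..D16
    kept, removed = [], []
    for i, (xl, xh) in enumerate(xs):
        for j, (yl, yh) in enumerate(ys):
            idx = i * k + j + 1            # subdomain number D_idx
            if xh <= yl:                   # max x <= min y: x > y cannot hold
                removed.append(idx)
            else:
                kept.append((idx, (xl, xh), (yl, yh)))
    return kept, removed

kept, removed = divide_and_refute()
```

The interval check `xh <= yl` plays the role of constraint refutation here: on such a box every \(x\) is at most every \(y\), so the box cannot intersect the solution set.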
For example, \( D_1 = (x \in 0..3, y \in 12..15) \) does not intersect the triangle domain, as \( x > y \) does not hold in \( D_1 \). As all the subdomains have the same area, we can still build a uniform test data generator for the resulting subdomain \( D' = D_4 \cup D_7 \cup D_8 \cup D_{10} \cup \ldots \cup D_{16} \). On this example, we eliminated 6 subdomains out of 16. By using the second invariance property, we get an easy way to draw test data uniformly: it suffices to draw a subdomain of \( D' \) at random and then to draw a value in this subdomain at random. This process is explained below. Thanks to the invariance properties, uniformity is preserved. Note that building a URTG from subdomains of distinct areas is also possible, by sampling \( D_1, \ldots, D_k \) with probability proportional to the size of each \( D_i \), but using uniform partitions is simpler.

Another advantage of constraint refutation is that it can detect non-feasible paths. Recall that non-feasible paths correspond to unsatisfiable constraint systems. Hence, when all the subdomains of the partition are shown to be inconsistent, it means that the corresponding path is non-feasible. This contrasts with RT approaches such as Adaptive RT [5] or Feedback-directed RT [21], which cannot detect non-feasible paths. Note however that our approach can fail to detect some non-feasible paths due to the incompleteness of constraint propagation.

### 5.3 A divide-and-conquer algorithm

We present an algorithm that performs PRT based on constraint reasoning. The algorithm takes as inputs a set of variables along with their variation domain, a constraint set \( PC \) corresponding to the path conditions of the selected path, the division parameter \( k \), and the length \( N \) of the expected random sequence. The algorithm returns a list of \( N \) uniformly distributed random tuples that all satisfy the path conditions. The list is empty when the corresponding path is detected as being non-feasible.
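Putting division, refutation and hierarchical drawing together, the generation process can be sketched end to end on the triangle example (our own illustrative code, with a hypothetical `prt` function; the refutation test is the simple interval check for \(x > y\)):

```python
import random

def prt(pc, lo, hi, k, n, seed=1):
    """Sketch of the divide-and-conquer generator on a square domain
    lo..hi x lo..hi: fair division into k*k subdomains, refutation of
    boxes where x > y cannot hold, then uniform hierarchical drawing
    with rejection of tuples violating the path conditions pc."""
    top = hi
    while (top - lo + 1) % k != 0:        # enlarge so that k divides the size
        top += 1
    w = (top - lo + 1) // k
    boxes = [(lo + i * w, lo + (i + 1) * w - 1) for i in range(k)]
    doms = [((xl, xh), (yl, yh))
            for (xl, xh) in boxes for (yl, yh) in boxes
            if xh > yl]                   # keep boxes where x > y may hold
    if not doms:
        return []                         # path detected as non-feasible
    rng = random.Random(seed)
    suite = []
    while len(suite) < n:
        (xl, xh), (yl, yh) = rng.choice(doms)            # uniform subdomain
        t = (rng.randint(xl, xh), rng.randint(yl, yh))   # uniform tuple in it
        if pc(*t):                        # reject tuples outside the path conditions
            suite.append(t)
    return suite

suite = prt(lambda x, y: y >= 0 and x <= 14 and x > y, 0, 14, 4, 100)
```

Because the surviving boxes all have the same area, choosing a box uniformly and then a tuple uniformly inside it preserves uniformity over their union (second invariance property); the final `pc` test is the residual rejection step.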
Firstly, the algorithm partitions the hypercuboid resulting from constraint propagation into \( k^n \) subdomains of equal area (function \texttt{Divide}). Then, each subdomain \( D_i \) in the partition is checked for unsatisfiability. This results in a list of subdomains \( D'_1, \ldots, D'_p \) where \( p \leq k^n \). Secondly, a URTG is built from this list by first picking a subdomain and then picking a tuple inside this subdomain. If the selected tuple does not satisfy the path conditions, it is simply rejected. This process is repeated until a sequence of \( N \) test data is generated.

This algorithm is semi-correct, meaning that when it terminates, it is guaranteed to provide the correct expected result, but it is not guaranteed to terminate. Indeed, in the second loop, \(N\) is decreased iff \(t\) satisfies \(PC\), which can happen only if \(PC\) is satisfiable. In other words, if \(PC\) is unsatisfiable and this has not been detected by constraint propagation (\(p \geq 1\)), then the algorithm will not terminate. Note that similar problems arise with random testing or path testing, as nothing prevents an unsatisfiable goal \(PC\) from being selected; in this case, all the test cases will be rejected. In practice, a timeout mechanism is necessary to enforce termination. This mechanism is not detailed here but it is mandatory in actual implementations. Note that any testing tool that executes programs should be equipped with such a timeout mechanism, as nothing prevents the tested program from activating an endless path.

**Algorithm 1: Path-oriented Random Testing**

**Input**: \((x_1, \ldots, x_n)\), \(PC\), \(k\), \(N\)
**Output**: \(t_1, \ldots, t_N\) or \(\emptyset\) (non-feasible path)

    T := ∅;
    (D_1, ..., D_{k^n}) := Divide((x_1, ..., x_n), k);
    forall D_i in (D_1, ..., D_{k^n}) do
        if D_i is inconsistent w.r.t. PC then
            remove D_i from (D_1, ..., D_{k^n})
    end
    Let D'_1, ..., D'_p be the remaining list of domains;
    if p >= 1 then
        while N > 0 do
            Pick up uniformly D at random from D'_1, ..., D'_p;
            Pick up uniformly t at random from D;
            if PC is satisfied by t then
                add t to T;
                N := N - 1
        end
    end
    return T;

Our algorithm generates a sequence of uniformly spread out test data that activate a selected path of the program.

## 6 Experimental results

### 6.1 Our PRT and RT implementations

We implemented Path-oriented Random Testing (PRT) with constraint reasoning and compared it with Random Testing (RT). Both implementations take path conditions and domains as input parameters and provide a uniform random test suite as a result. To be fair, both implementations (PRT and RT) make use of the same random number generator (the AS 183 algorithm from Wichmann and Hill [23]) and the same path condition evaluation scheme, under the form of Prolog constraints. The PRT implementation additionally exploits the SICStus Prolog library \texttt{clp(fd)}, which offers constraint propagation and labeling heuristics. Both implementations (RT and PRT) and all our experiments are available online\textsuperscript{5}.

PRT also comes with an additional parameter \( k \), the division parameter defined in Sec. 5.1. When \( k = 1 \), the input domain is not divided and constraint refutation is applied only once, on the entire domain. When \( k > 1 \), the constraint refutation part of our divide-and-conquer algorithm is applied on various subdomains of the input domain and sometimes permits pruning of the input domain.

### 6.2 Programs to be tested

We evaluated PRT w.r.t. RT on several programs: the \texttt{foo} program given in Fig.1, the \texttt{power} program given in Fig.2, the \texttt{trityp} program that is part of the Software Testing folklore, and two real-world programs coming from the Civil and Military Aerospace domain.
\texttt{tcas} is extracted from the Traffic alert and Collision Avoidance System (TCAS), a computerized avionics device designed to reduce the danger of mid-air collisions between aircraft. From the Software-artifact Infrastructure Repository (Do et al. 2005), it is possible to download a C component, called \texttt{tcas.c}, of a preliminary version of TCAS. This freely and publicly available component is (modestly) made up of 173 lines of C code. Finally, \texttt{ardeta} is a C program belonging to a large application designed to connect electronic equipment for military aircraft on a test bench airplane. This program is made of 1305 lines of code. Both source codes contain nested conditionals, logical operators, bit-level operators, type definitions, macros and function calls, but no floating-point variables, loops, pointers or dynamically allocated structures. All the experimental results were computed on a 2.4GHz Intel Core Duo with 2GB of RAM.

\textsuperscript{5} \url{www.irisa.fr/lande/gotlieb/resources/PRT}

### 6.3 Experiments on the \texttt{foo} program

Fig. 4 reports on the results obtained for the path 1→2→3→4→5 in the \texttt{foo} program by regularly increasing the desired length of the random test suite. Fig.4 shows the number of test data generated with the PRT approach, for four distinct values of the division parameter, and with traditional RT. For example, the first column shows that the number of rejects of the RT method is 9392−50 = 9342 test data, while it only evaluates to 88−50 = 38 with PRT when \( k = 1 \), 15 with PRT when \( k = 2 \), and so on.
| Requested | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
|---|---|---|---|---|---|---|---|---|---|---|
| RT | 9392 | 17800 | 26206 | 30859 | 42852 | 51184 | 61034 | 69690 | 77274 | 82669 |
| PRT (\(k = 1\)) | 88 | 180 | 280 | 351 | 432 | 495 | 589 | 718 | 821 | 840 |
| PRT (\(k = 2\)) | 65 | 132 | 187 | 263 | 311 | 387 | 461 | 534 | 586 | 644 |
| PRT (\(k = 3\)) | 54 | 119 | 186 | 254 | 294 | 352 | 412 | 460 | 531 | 576 |
| PRT (\(k = 4\)) | 53 | 114 | 159 | 216 | 280 | 325 | 381 | 457 | 502 | 556 |

Figure 4. Length of the test suite generated for \texttt{foo}

In PRT with \( k = 2 \), a single subdomain out of 4 is shown to be unsatisfiable, whereas 5 subdomains out of 9 are shown to be unsatisfiable with \( k = 3 \), and 11 out of 16 with \( k = 4 \). When the requested length of the test suite is less than 500, the CPU time required to get a uniform random test suite (including unsatisfiability detection) is always less than 1 sec. The next experiment studies CPU time on longer test suites. The results of Fig.4 show that the probability of rejecting test data (those that do not satisfy the path conditions) decreases whenever the division parameter increases. For example, PRT with \( k = 1 \) requires 840 test data for producing 500 test data that cover the selected path, while PRT with \( k = 4 \) only requires 556 test data for the same task. Fig.
5 shows the CPU time required to generate longer suites of random test data on the \texttt{foo} program. When the requested length is 35000, more than 10 million test cases are generated and evaluated by the RT implementation.

| Requested | 5000 | 10000 | 15000 | 20000 | 25000 | 30000 | 35000 |
|---|---|---|---|---|---|---|---|
| RT | 27.5s | 55.5s | 82.5s | 111.1s | 139.4s | 158.3s | 159.4s |
| PRT (\(k = 1\)) | 0.08s | 0.17s | 0.34s | 0.42s | 0.61s | 0.73s | 0.98s |
| PRT (\(k = 2\)) | 0.06s | 0.19s | 0.32s | 0.53s | 0.80s | 1.06s | 1.37s |
| PRT (\(k = 3\)) | 0.06s | 0.19s | 0.32s | 0.53s | 0.77s | 1.03s | 1.34s |
| PRT (\(k = 4\)) | 0.06s | 0.17s | 0.32s | 0.53s | 0.76s | 1.03s | 1.34s |

Figure 5. CPU time required for generating test suite for \texttt{foo}

The results show that PRT, in any version, is almost two orders of magnitude better than traditional RT on this example. One can object that traditional RT may be directly implemented in C and the satisfaction of the path conditions may be checked by instrumentation during program execution. This would optimize the test data rejection process by saving the time required to keep track of contexts in our Prolog implementation, but it would gain nothing but a constant factor in CPU time. Note however that the CPU time required by PRT with \(k = 4\) becomes greater than the one required by PRT with \(k = 1\) when the requested length is greater than 20000. This is due to the cost of constraint refutation on subdomains.
Hence, the value of the division parameter \(k\) offers a trade-off between the number of generated test data and the CPU time required to get a test suite of a given length.

### 6.4 Experiments on the \texttt{power} program

We selected the path \(1 \rightarrow (2 \rightarrow 3)_{10000} \rightarrow 2 \rightarrow 4 \rightarrow 6\) from the \texttt{power} program, which iterates \(10^4\) times in the loop, in order to evaluate PRT when a larger number of constraints is involved in the constraint propagation and refutation process. Input variables were constrained to belong to \(0..50000\) and the constraint \(R1\ \#{=}\ X \times R\) that computes the power of \(X\) was replaced by \(R1\ \#{=}\ (X \times R) \bmod 2\) to avoid the computation of big integers. In this experiment, the constraint solver has to manage more than 20000 constraints. The experimental results show that PRT with \(k = 1\) generates a random sequence of 100 test data in 38.5 sec of CPU time. Whenever \(k = 2\), the time required is 38.8 sec and 2 subdomains out of 4 have been refuted. Whenever \(k = 3\), the time required is 38.7 sec and 6 subdomains out of 9 are refuted; finally, when \(k = 4\), 38.9 sec are required and 12 subdomains out of 16 are refuted. Hence, in all the cases, the CPU time required is similar. It is worth noticing that the constraint propagation step permitted the instantiation of the second input parameter of \texttt{power}, and then there was no reject at all: any randomly generated test datum within the domain was accepted. The same request for the RT program never terminates, as the event \(Y = 10000\) has a very low probability of happening. Note that each path has the same probability of being activated in \texttt{power}, as each value of \(Y\) in \(0..50000\) leads to a distinct path being activated. This experiment shows that PRT can scale up when numerous constraints are involved.
### 6.5 Experiments on the \texttt{trityp} program

For the \texttt{trityp} program, we manually extracted a list of 7 paths, with their associated path conditions, that covers all the decisions of the program. In this process, we did not pay attention to the feasibility of these paths, as many other structural testing tools do. We confined the domain of the input variables to \(0..100\) and compared PRT and RT while generating random test suites of increasing lengths. The experimental results are given in Fig.6. Although the results show that PRT outperforms RT, they are not as good as we expected. Firstly, it is well known that RT cannot easily cover the all-decisions criterion on the \texttt{trityp} program, as several events have a very low probability of happening. For example, generating a tuple of three equal values (an equilateral triangle) is a rare event. Of course, similar drawbacks exist with the PRT approach: a randomly chosen value is not propagated throughout the constraint network, as this would bias the uniformity of the generator. Secondly, we expected PRT to detect non-feasible paths among the paths selected to cover all decisions. But finding inconsistent subdomains requires the division parameter \(k\) to be instantiated to 13. In this case, 469 subdomains are shown inconsistent out of a total of 2197. Note that among the 7 paths, 4 are non-feasible. As the value \(k = 13\) of the division parameter depends on the problem, we decided to avoid taking advantage of this knowledge and confined our experiments to small values of \(k\). In theory, selecting greater values for \(k\) would increase the deductions, as many additional subdomains would be tested for satisfiability and possibly discarded. But the time required to check satisfiability would also increase accordingly. In practice, selecting small values for \(k\) (e.g. \(k \in 1..4\)) permits maximizing the gain by eliminating large subdomains while keeping an acceptable overhead.
### 6.6 Experiments on the tcas program

For the tcas program, we selected the longest path of the function alt_sep_test. This path contains 18 function calls and several complex logical decisions. The input space of the function alt_sep_test is made of 12 global 32-bit unsigned integer variables. We arbitrarily restricted each input variable to \(0..1000\) in order to avoid undesirable effects at the bounds of domains in both the RT and PRT implementations. Hence, the input domain is of cardinality \(1001^{12}\). The results we got for this program are given in Fig.7.

Figure 7. CPU time required for generating random test suite on program \textit{tcas}

Our results on the tcas example show a two-orders-of-magnitude improvement of PRT with \(k = 1\) over RT. This is well explained by the fact that activating the longest path of the program is difficult, as it corresponds to a small subdomain of the input space. Constraint propagation drastically prunes the search space on this example. By analyzing the results, we found that 28672 subdomains out of 65536 were eliminated when \(k = 2\). So, using the constraint refutation process on this example is useful and means that a tighter over-approximation can be automatically found. However, the CPU time required to prune the refuted subdomains, even if it stays constant when the requested length of the test suite increases, penalizes PRT with \(k = 2\).
| Requested | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 |
|---|---|---|---|---|---|---|---|---|---|---|
| RT, CPU time | 58.6s | 103.4s | 191.4s | 275.6s | 298.7s | 282.8s | 482.9s | 424.2s | 525.6s | 541.4s |
| RT, test data | 185160 | 328874 | 609571 | 866125 | 949171 | 925341 | 1578769 | 1388161 | 1719640 | 1772755 |
| PRT (\(k = 1\)), CPU time | 0.7s | 0.7s | 1.0s | 1.4s | 1.4s | 1.8s | 2.3s | 2.4s | 2.7s | 3.4s |
| PRT (\(k = 1\)), test data | 154 | 179 | 225 | 320 | 343 | 483 | 601 | 624 | 721 | 864 |
| PRT (\(k = 2\)), CPU time | 92.0s | 93.0s | 93.6s | 96.1s | 93.9s | 90.5s | 93.2s | 92.7s | 93.1s | 92.9s |
| PRT (\(k = 2\)), test data | 30 | 88 | 133 | 221 | 236 | 278 | 329 | 377 | 523 | 447 |

Figure 8. CPU time required for generating random test suite on program \textit{ardeta}.

The results obtained on program \textit{ardeta}, shown in Fig. 8, are similar to those obtained for \textit{tcas}, except that the times required to generate the requested test suites are similar when \(k = 1\) and \(k = 2\).
In the second case, 2048 subdomains over 4096 are found to be inconsistent, which corresponds to half of the entire input domain, and the time required to find the inconsistencies is small w.r.t. the CPU time required to generate the test suite. These results indicate that both constraint propagation and constraint refutation are useful and efficient in PRT on moderate-sized benchmarks. However, other experiments on larger benchmarks would be required to confirm these results.

### 6.7 Related work

PRT is a technique that improves path testing by building a URTG that activates a single control flow path. We are not aware of any other technique that addresses the same problem in the context of software testing. However, in the context of hardware verification [7], the work of Gogate and Dechter [11] also aims at sampling the solutions of a constraint system uniformly at random. Their algorithm belongs to the class of Monte-Carlo algorithms; it samples from the output of a generalized belief propagation algorithm, which is a variation of what we called the rejection method in this paper. Nevertheless, their approach is dedicated to Constraint Satisfaction Problems where constraints are defined by tuples (e.g. if \( x \) and \( y \) belong to \( 1..2 \), the constraint \( x \neq y \) is defined in extension as the tuples \( \{(1, 2), (2, 1)\} \)) and to boolean satisfiability problems [12]. It is not easy to adapt these techniques to constraint systems extracted from path conditions, as variables range over large domains (e.g. 32-bit integer variables) and constraints are defined by formulas instead of tuples. In addition, unlike the algorithm of Gogate and Dechter, our divide-and-conquer algorithm is non-intrusive, meaning that the constraint solver is used as a black box. Note that the idea of exploiting constraint reasoning in Random Testing is not new. Chan et al. proposed in [4] several implementations of the Center of Gravity constraint as a way to improve Adaptive RT [5].
However, unlike other Random Testing approaches, PRT exploits constraint propagation and refutation to get a uniform sequence of test data that activates a selected path. Thanks to its use of constraint reasoning, PRT is able to show in some cases that the path conditions have no solution and that the corresponding path is non-feasible. This is outside the scope of advanced RT techniques such as adaptive RT [5] or feedback-directed RT [21]. There exist tools that perform automatic test data generation for path testing. In PathCrawler [24], Williams et al. propose a randomized algorithm that generates test suites covering the k-paths criterion by using symbolic execution and constraint propagation over finite domains. Godefroid, Klarlund and Sen independently followed a similar approach in the tools DART (Directed Automated Random Testing) [10] and CUTE [22]. They obtained very good experimental results on C programs extracted from real-world applications. Recently, the tool JPF-SE [1] was proposed in the context of software model checking to generate test data. This tool exploits various decision procedures to find test data that activate certain paths of Java programs. However, all these approaches generate a single test datum for each considered path, and their goal is to achieve complete coverage of all the feasible paths of a program up to a certain limit. In [6], Collavizza and Rueher explored the capabilities of finite domain constraint solvers for testing Java programs. They showed that these solvers can be very efficient at generating a single test datum that satisfies the path conditions. We believe that PRT could complement these approaches by generating a uniform sequence of test data to activate each selected path. This would certainly improve the fault-revealing capabilities of these techniques, as each path would be more thoroughly exercised.
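To make the divide-and-conquer scheme concrete, here is a minimal, self-contained sketch in Python (not the authors' implementation): each variable's domain is split into $k$ intervals, provably inconsistent subdomains are refuted by a caller-supplied interval check standing in for constraint propagation, and test data are drawn uniformly by weighting the surviving subdomains by their cardinality and rejecting points that violate the exact path condition. The names `prt`, `split`, `refute` and `holds` are illustrative, not from the paper.

```python
import itertools
import random

def split(lo, hi, k):
    # Split the integer interval lo..hi into k contiguous subintervals.
    step = (hi - lo + 1) // k
    return [(lo + i * step, hi if i == k - 1 else lo + (i + 1) * step - 1)
            for i in range(k)]

def prt(domains, refute, holds, n, k=2, seed=0):
    rng = random.Random(seed)
    # Divide: build the k^d grid of subdomains; refute the provably empty ones.
    cells = [c for c in itertools.product(*(split(lo, hi, k) for lo, hi in domains))
             if not refute(c)]
    if not cells:            # every subdomain refuted: the path is non-feasible
        return []
    # Weight each surviving cell by its cardinality so that choosing a cell,
    # then a point inside it, is uniform over the union of the cells.
    sizes = [1 for _ in cells]
    for i, cell in enumerate(cells):
        for a, b in cell:
            sizes[i] *= b - a + 1
    suite = []
    while len(suite) < n:
        cell = rng.choices(cells, weights=sizes)[0]
        point = tuple(rng.randint(a, b) for a, b in cell)
        if holds(point):     # cells only over-approximate the path condition
            suite.append(point)
    return suite
```

For instance, with two variables in $0..1000$ and path condition $x + y > 1500$, the interval check `lambda cell: cell[0][1] + cell[1][1] <= 1500` refutes three of the four $k = 2$ subdomains, so all sampling happens inside the single remaining one.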
## 7 Conclusion

This paper introduced constraint reasoning in Path-oriented Random Testing, through the use of constraint propagation and refutation over finite domains. We proposed a simple divide-and-conquer algorithm that efficiently builds a uniform sequence of test data exercising a selected path in the program under test. Although our approach was evaluated on only a few benchmark programs, we showed that Path-oriented Random Testing outperforms traditional RT on realistic examples. As discussed in the paper, we believe that Path-oriented Random Testing could be advantageously exploited in other path-oriented test data generation techniques to improve their fault-revealing capabilities. However, our approach is currently limited to integer variables, and dealing with programs that manipulate pointers and floating-point variables is indispensable for scaling the approach up to realistic languages. This is challenging, as it requires not only solving constraints over these features but also building uniform random test data generators for complex data structures, such as simple lists, circular lists, doubly linked lists and so on.

Acknowledgments. Many thanks to Nicky Williams and the anonymous referees who provided us with helpful comments on an earlier draft of this paper.

References
Higher-Order Value Flow Graphs Christian Mossin DIKU, University of Copenhagen** Abstract. The concepts of value- and control-flow graphs are important for program analysis of imperative programs. An imperative value flow graph can be constructed by a single pass over the program text. No similar concepts exist for higher-order languages: we propose a method for constructing value flow graphs for typed higher-order functional languages. A higher-order value flow graph is constructed by a single pass over an explicitly typed program. By using standard methods, single source and single use value flow problems can be answered in linear time and all source-all uses can be answered in quadratic time (in the size of the flow graph, which is equivalent to the size of the explicitly typed program). On simply typed programs, the precision of the resulting analysis is equivalent to closure analysis [10,11,8]. In practice, it is a reasonable assumption that typed programs are only bigger than their untyped equivalent by a constant factor, hence this is an asymptotic improvement over previous algorithms. We extend the analysis to handle polymorphism, sum types and recursive types. As a consequence, the analysis can handle (explicit) dynamically typed programs. The analysis is polyvariant for polymorphic definitions. Keywords: program analysis, type system, efficiency, polymorphism, recursive types, polyvariance. 1 Introduction Flow analysis of a program aims at approximating at compile-time the flow of values during execution of the program. This includes relating definitions and uses of first-order values (eg. which booleans can be consumed by a given conditional) but also flow of data-structures and higher-order values (eg. which function-closures can be applied at a given application). Values are abstracted by a label of the occurrence of the value — i.e. 
the label of first-order values uniquely identifies the occurrence of the value, while data-structures and closures are abstracted by the label of the constructor resp. lambda. Flow information is directly useful for program transformations such as constant propagation or firstification, and, by interpreting the value flow in an appropriate domain, for many other program analyses. Furthermore, information about higher-order value flow can allow first-order program analysis techniques to be applicable to higher-order languages. ** Universitetsparken 1, DK-2100 Copenhagen Ø, Denmark, e-mail: mossin@diku.dk

We present a flow analysis for typed, higher-order functional languages. The analysis constructs a value flow graph which is linear in the size of the explicitly typed program. Single queries (single-source or single-sink data-flow) can be performed on this graph by standard reachability algorithms in linear time and, similarly, full flow information can be obtained in quadratic time. On simply typed programs our analysis is equivalent in strength to closure analysis [10,11] and the constraint based analysis of Palsberg [8]. Since explicitly typed programs are typically only a constant factor bigger than the underlying untyped program, this gives (under the assumption that all types are bounded by a constant) an asymptotic improvement over previously published algorithms. Independently of this work, Heintze and McAllester [4] developed a constraint based analysis with the same properties as ours (i.e. linear time single query flow analysis of simply typed programs) — a careful examination reveals that the analyses are indeed similar. The presentations, however, are fundamentally different. In particular, Heintze and McAllester do not develop an explicit flow graph and the concept of types only occurs in the complexity argument. In our approach, types are an integral part of the analysed program.
This makes our analysis easier to extend: in this paper we show how to deal with polymorphism (Heintze and McAllester give an ad hoc solution basically unfolding all polymorphic definitions), recursive types (Heintze and McAllester show how lists can be handled, but it is not clear how to generalise to arbitrary recursive types) and sum types (not considered beyond lists by Heintze and McAllester). We add polymorphism, sums and recursive types by considering these constructs as new value/consumer pairs: abstraction/instantiation, injection/projection and fold/unfold. Being able to handle languages with sums and recursive types allows us to specify the analysis for languages with an explicit dynamic type system — thus making the analysis applicable for untyped as well as typed languages. The analysis is polyvariant for polymorphic definitions: monomorphic program analyses lose precision at function definitions by mixing information from different call-sites. Our analysis avoids this for arguments of polymorphic type. E.g., the map function of type $\forall \tau, \tau'. (\tau \rightarrow \tau') \rightarrow [\tau] \rightarrow [\tau']$ will be polyvariant in the elements of the lists (as suggested by $\tau$ and $\tau'$) but not in the function and list values themselves — this is exactly the degree of polyvariance we want, as the function is consumed by the map function and the resulting list structure is produced by the map function; hence, further polyvariance will not improve precision. If the map function is given a monomorphic definition, polyvariance is lost. It is true in general that the precision of the proposed analysis is dependent on the choice of type assignment.

2 Language

We will start with a simply typed lambda calculus extended with booleans, pairs and recursion. The types of the language are $$t ::= \text{Bool} \mid t \rightarrow t' \mid t \times t'$$ We present the language using the type system of figure 1.
In order to refer to subexpression occurrences, we assume that terms are labelled. We assume that labelling is preserved under reduction — hence, a label does not identify a single occurrence of a sub-expression, but a set of subexpressions (intuitively redexes of the same original subexpression). The semantics of the language is given by the reduction rules in figure 2. As usual we write \( \rightarrow^* \) for the reflexive and transitive closure of \( \rightarrow \). We assume for all expressions that bound and free variables are distinct, and that this property is preserved (by \( \alpha \)-conversion) during reduction. We will refer to abstractions, booleans and pairs as data, and applications, conditionals and ‘let \( (x, y) \) be \( e \) in \( e' \)’ as consumers — thus \( \beta \), \( \delta \)-if and \( \delta \)-let-pair reductions are data-consumptions. Data flow analysis seeks a safe approximation to possible consumptions during any reduction of a term.

3 Flow Graphs

A typed flow graph for a type derivation $\mathcal{T}$ for an expression $e$ is a graph $(V, E)$ where $V$ is a set of nodes and $E$ is a set of edges. Each judgement $A \vdash e : t$ in $\mathcal{T}$ is represented by a set of nodes: one node for each constructor (Bool, $\times$, $\rightarrow$) in $t$. The node associated with the top type constructor of $t$ is named according to $e$ while the rest are anonymous (but still conceptually associated with this named node). Collections of nodes associated with different judgements are called multi-nodes (or just m-nodes) and are connected by collections of edges called cables which intuitively carry values of the appropriate type. It is convenient to think of such an m-node as a parallel “plug”. We will use $N$ for m-nodes and $n$ for single nodes. Each variable (bound or free) in the analysed expression $e$ will give rise to one variable m-node and each occurrence of a variable gives rise to a box m-node.
Every other subexpression $e'$ of $e$ will give rise to a syntax m-node and a box m-node. The latter is referred to as the root of the graph associated with $e'$ and represents the result of evaluating $e'$. The set of edges between two m-nodes forms a cable. To be precise, we define a $t$-cable as follows:

1. A Bool-cable is a single edge (a wire).
2. A $(t \rightarrow t')$-cable consists of a single wire, a flipped $t$-cable (carrying arguments) and a $t'$-cable (carrying results).
3. A $(t \times t')$-cable consists of a single wire, a $t$-cable and a $t'$-cable.

By “flipped” we mean inverting the direction of all wires in the cable, but not changing the top-to-bottom order of wires. We will use $w$ for wires (edges) in $E$. Edges are also considered paths (of length one). Composition of paths $p_1$ and $p_2$ is written $p_1 \cdot p_2$. The single wire introduced at the top level of a cable $c$ (the one that is not part of a sub-cable) is called the carrier of $c$. A path $w \cdot p \cdot w'$ is a def-use path if it starts from a data m-node, ends at a consumer m-node and $w$ and $w'$ are carriers. If $w$ is an edge in a cable $c$, we say that it is a forward edge if it has the same orientation as the carrier of $c$, and a backward edge if it has the opposite orientation.

3.1 Simple Types

Figure 3 defines a function $\mathcal{G}$ from type derivations to typed flow graphs.
Each right-hand side of the definition has a root m-node which is the m-node to be connected at recursive calls.$^1$

$^1$ The reader might want to think of the graphs as graphical representations of constraint sets: nodes $n$ as variables and edges from $n_1$ to $n_2$ as constraints $n_1 \leq n_2$.

**Fig. 3.** Flow graphs for simply typed lambda calculus (the graphical defining equations of $\mathcal{G}$, covering variables, abstraction, application, pairs, let-pair, True, False, if and fix, are not reproduced here)

Note that each data m-node generates a new carrier starting at the m-node and connects the sub-cables, while a consumer m-node terminates a carrier (and connects sub-cables). Furthermore, note that whenever two cables are connected, they have the same type.

- The variable case constructs a root m-node and connects the (unique) variable m-node to the root.
- The case for $\mathcal{G}(\frac{T}{A \vdash \lambda^l x.e : t \rightarrow t'})$ constructs a $\lambda^l$ value m-node. A $t \rightarrow t'$ cable (call it $c$) leaves this m-node towards the root m-node of the graph, indicating that the lambda itself is the result of the expression.
A $t$-cable connecting to the argument sub-cable of $c$ goes towards the unique variable m-node for the bound variable. Similarly, a cable leaving the root of the graph for $e$ connects to the result sub-cable of $c$.
- In the case for $e @^l e'$, a $t \rightarrow t'$ cable $c$ enters an $@^l$ consumer m-node from the root of $\mathcal{G}(T)$ (where $T$ is the derivation for $e$ and $T'$ is the derivation for $e'$ — similarly in the later cases). The root of $\mathcal{G}(T')$ is connected at the $@^l$ m-node to the argument sub-cable of $c$, and the result sub-cable of $c$ is connected to the root of the graph.
- In the case for pairs, the cables from the roots of the components are combined at a $(\cdot, \cdot)^l$ value m-node. The cable leaving the $(\cdot, \cdot)^l$ m-node is a pair cable and goes to the root.
- The case for ‘let $(x,y)$ be $e$ in $e'$’ lets a pair cable from $\mathcal{G}(T)$ enter a ‘let $(\cdot, \cdot)^l$’ consumer m-node and sends the sub-cables on to the (unique) m-nodes for the bound variables. The graph $\mathcal{G}(T')$ connects directly to the root (since $e'$ will usually have occurrences of $x$ or $y$, the graph will usually be connected). The dotted edge from the ‘let $(\cdot, \cdot)^l$’ m-node to $\mathcal{G}(T')$ is only included to aid readability and is not part of the graph.
- The cases for booleans and ‘if’ should be straightforward — note that both branches of conditionals connect to the root, indicating that we do not know which branch will be taken.
- Applying $\mathcal{G}$ to $\frac{T}{A \vdash \text{fix}\, x.e : t}$ connects $\mathcal{G}(T)$ to $x$, indicating that the result of evaluating $e$ is bound to $x$. We then connect $x$ to the root.$^2$ Note that the ‘fix’ m-node is not connected to the graph — thus variable, data, consumer and box m-nodes suffice (the ‘fix’ m-node is only included for readability).
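Once a flow graph is built, a single-source def-use query reduces to plain graph reachability. The following Python sketch hand-wires a drastically simplified graph for a hypothetical program `let f be λ²y.y in f @⁵ True³` — the chains of wires through cables are collapsed into single edges, so this only approximates the construction above — and answers queries by BFS:

```python
from collections import deque

# Hand-wired, heavily simplified value flow graph (node names hypothetical).
edges = {
    "λ²":      ["box_let"],   # carrier: the lambda value flows to the binding
    "box_let": ["f"],         # ... and is bound to the variable m-node f
    "f":       ["box_f"],     # each occurrence of f has its own box m-node
    "box_f":   ["@⁵"],        # the carrier ends at the application consumer
    "True³":   ["y"],         # argument wire: the argument flows to y
    "y":       ["box_y"],     # the body of the identity returns y
    "box_y":   ["root"],      # result wire: the body's result reaches the root
}

def reachable(src):
    """Single-source reachability by BFS: linear in the size of the graph,
    matching the claimed cost of a single def-use query."""
    seen, queue = {src}, deque([src])
    while queue:
        for succ in edges.get(queue.popleft(), []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen
```

Here `reachable("λ²")` contains `"@⁵"` (the lambda is consumed at the application), while `reachable("True³")` reaches `"root"` but not `"@⁵"`.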
The interface of a graph $G = \mathcal{G}(\frac{T}{A \vdash e : t})$ consists of the root m-node of $G$ and the variable m-nodes $x$ where $x$ is free in $e$. Note that there is a one-to-one correspondence between type constructors in $A \vdash e : t$ and nodes in the interface of $G$. An occurrence of a type constructor in $x_1 : t_1, \ldots, x_n : t_n \vdash e : t$ is called positive (negative) if it occurs positively (negatively) in $t_1 \rightarrow \cdots \rightarrow t_n \rightarrow t$.$^3$ A node occurs positively (negatively) in the interface of $\mathcal{G}(A \vdash e : t)$ if the corresponding type constructor occurs positively (negatively) in $A \vdash e : t$.

$^2$ Connecting $\mathcal{G}(T)$ to the root would have been equally good, but with this formulation we preserve the property that only variable m-nodes can have more than one outgoing cable.

**Example 1.** Applying $\mathcal{G}$ to the unique derivation of

\[ \vdash \text{let}^{l_1}\, (f, x) \text{ be } (\lambda^{l_2} y.\, y, \text{True}^{l_3})^{l_4} \text{ in } f @^{l_5} x : \text{Bool} \]

results in a typed flow graph which is not reproduced here. The reader is encouraged to follow the def-use path from $\lambda^{l_2}$ to $@^{l_5}$ and the path from $\text{True}^{l_3}$ to the root of the graph.

3.2 Polymorphism

To add polymorphism, we extend the language of types as follows:$^4$

\[ t ::= \tau \mid \text{Bool} \mid t \rightarrow t' \mid t \times t' \mid \forall \tau. t \]

where we use $\tau$ for type variables. We add explicit syntax for generalisation and instantiation. The extension to the language is defined by the type rules

\[ \forall \mathbf{I}\ \frac{A \vdash e : t}{A \vdash \Lambda^l \tau. e : \forall \tau. t}\ (\tau \text{ not free in } A) \qquad \forall \mathbf{E}\ \frac{A \vdash e : \forall \tau. t'}{A \vdash e\{t\}^l : t'[t/\tau]} \]

and the reduction rule:

\[ (\beta_\forall) \quad (\Lambda^{l'} \tau. e)\{t\}^{l} \rightarrow e[t/\tau] \]

where $\Lambda$ is data and $\{\cdot\}$ is a consumer.
Intuitively, a type variable $\tau$ can carry any value since it might be instantiated to any type. For graphs, however, the opposite intuition is more fruitful: no value is carried along a $\tau$-cable since, as long as the value has type $\tau$, it cannot be used. This approach relies on the same intuition as “Theorems for Free”: a function cannot touch arguments of polymorphic type [12]. Thus a $\tau$-cable is no cable at all and the appropriate connections are made at the instantiation m-node. A $\forall \tau. t$ cable is a $t$-cable.

$^3$ Assume the syntax tree for a type $t$. If the path from the root of the tree to a type constructor $c$ (one of $\text{Bool}$, $\times$ or $\rightarrow$) follows the argument branch of $\rightarrow$ constructors an even (odd) number of times then $c$ is said to occur positively (negatively) in $t$.

$^4$ Since we are analysing programs that are already typed, System F gives a smoother presentation — the program might well be typed without using the full power (e.g. by allowing polymorphism only in let-bound expressions).

Since $t$-cables and $\forall \tau. t$-cables are the same, a quantification m-node just passes on its incoming cable:

\[ \mathcal{G}\Big(\frac{T}{A \vdash \Lambda^l \tau. e : \forall \tau. t}\Big) = \mathcal{G}(T) \rightarrow \Lambda^l \rightarrow \square \]

An instantiation m-node has an incoming $\forall \tau. t'$ cable and an outgoing $t'[t/\tau]$ cable. All wires of the $\forall \tau. t'$ cable are connected to the corresponding wires in the $t'[t/\tau]$ cable. The remaining edges of the $t'[t/\tau]$ cable form $t$-cables — these are connected such that negative occurrences of $t$ are connected to all positive occurrences of $t$. To be precise, assume that $t'$ has $n$ occurrences of $\tau$ and write the occurrences as $\tau^{(1)}, \ldots, \tau^{(n)}$ and the corresponding occurrences of $t$ in $t'[t/\tau]$ as $t^{(1)}, \ldots, t^{(n)}$.
For any pair $\tau^{(i)}, \tau^{(j)}$ where $\tau^{(i)}$ occurs negatively and $\tau^{(j)}$ occurs positively in $t'$, add a $t$-cable from $t^{(i)}$ to $t^{(j)}$.

**Example 2.** Consider a term that applies an identity function $id$, defined as $\Lambda \tau. \lambda x. x$ and given type $\forall \tau. \tau \rightarrow \tau$, at the two instantiations $id\{\text{Bool}\}$ and $id\{\text{Bool} \times \text{Bool}\}$. The graph fragment for the definition of $id$ is not reproduced here; the dashed edges of the original figure are not part of the graph and are only included to make it more readable, since it would otherwise be completely unconnected$^5$. The graph fragment for the applications of $id$ is likewise omitted (superfluous box-nodes are left out).

$^5$ While the dashed edges are not necessary to find def-use paths, they can be included if we want information about which values a given variable can be bound to.
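The wiring rule at instantiation m-nodes depends only on the polarities of the occurrences of $\tau$ in $t'$ (footnote 3). This can be sketched in a few lines of Python, with types encoded as nested tuples (an encoding of our own choosing, not from the paper):

```python
def polarities(t, var, pos=True):
    # Footnote 3: an occurrence is positive iff the path from the root of
    # the type's syntax tree to it follows the argument branch of an arrow
    # an even number of times.  Types are nested tuples ("arrow", a, b),
    # ("pair", a, b), ("sum", a, b), or a bare string for Bool / a variable.
    if t == var:
        return [pos]
    if isinstance(t, str):        # Bool, or some other type variable
        return []
    tag, left, right = t
    left_pos = not pos if tag == "arrow" else pos
    return polarities(left, var, left_pos) + polarities(right, var, pos)

def instantiation_cables(t, var):
    # At an instantiation m-node, a t-cable is added from every negative
    # occurrence of the instance type to every positive occurrence.
    occ = polarities(t, var)
    return [(i, j) for i, p in enumerate(occ) if not p
                   for j, q in enumerate(occ) if q]
```

For $id : \forall \tau. \tau \rightarrow \tau$, the body $\tau \rightarrow \tau$ has one negative and one positive occurrence of $\tau$, so exactly one cable is added, from the argument position to the result position.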
3.3 Sum Types

Sum types have the following syntax:

\[ t ::= \text{Bool} \mid t \rightarrow t' \mid t \times t' \mid t + t' \mid 1 \]

Again, we associate syntax with the type rules:

\[ 1\mathbf{I}\ \frac{}{A \vdash u^l : 1} \qquad +\mathbf{I}\ \frac{A \vdash e : t}{A \vdash \text{inl}^l(e) : t + t'} \quad \frac{A \vdash e : t'}{A \vdash \text{inr}^l(e) : t + t'} \]

\[ +\mathbf{E}\ \frac{A \vdash e : t + t' \quad A, x : t \vdash e' : t'' \quad A, y : t' \vdash e'' : t''}{A \vdash \text{case}^l\, e \text{ of } \text{inl}(x) \Rightarrow e';\ \text{inr}(y) \Rightarrow e'' : t''} \]

The reduction rules for the constructs are as follows:

\[ (\delta\text{-case}) \quad \text{case}^{l}\ \text{inl}^{l'}(e) \text{ of } \text{inl}(x) \Rightarrow e';\ \text{inr}(y) \Rightarrow e'' \rightarrow e'[e/x] \]
\[ \phantom{(\delta\text{-case})} \quad \text{case}^{l}\ \text{inr}^{l'}(e) \text{ of } \text{inl}(x) \Rightarrow e';\ \text{inr}(y) \Rightarrow e'' \rightarrow e''[e/y] \]

where ‘inl’ and ‘inr’ construct data and ‘case’ is a consumer. To extend typed graphs with sum types, we first have to define cables carrying values of sum type: a $(t+t')$-cable consists of a carrier wire, a $t$-cable and a $t'$-cable. The carrier wire represents the flow of the ‘inl’/‘inr’ data. In figure 4 we extend the definition of \( \mathcal{G} \) with the new syntactic constructs. The constructs should be straightforward; the unit \( u^l \) is treated like other constants, \( \text{inl}^l(e) \) and \( \text{inr}^l(e) \) connect the root of \( \mathcal{G}(T) \) to the appropriate sub-cable of the sum-cable — nothing flows into the other sub-cable. Finally, the case-construct decomposes the sum (in a manner similar to ‘let $(\cdot,\cdot)$’) and connects the branches to the root (similarly to ‘if’).
3.4 Recursive Types

Recursive types add the ability to define integers, lists etc.:

\[ t ::= \tau \mid \text{Bool} \mid t \rightarrow t' \mid t \times t' \mid t + t' \mid \mu \tau. t \]

where we have retained the sum types from above (since they are required to make practical use of recursive types). Usually, recursive types are added to type systems by adding the equivalence \( \mu \tau. t = t[\mu \tau. t/\tau] \). We make applications of this equivalence explicit in the language by adding syntax with the following type rules:

\[ \text{fold}\ \frac{A \vdash e : t[\mu \tau. t/\tau]}{A \vdash \text{fold}^l(e) : \mu \tau. t} \qquad \text{unfold}\ \frac{A \vdash e : \mu \tau. t}{A \vdash \text{unfold}^l(e) : t[\mu \tau. t/\tau]} \]

We consider ‘fold’ as data and ‘unfold’ as a consumer and add the reduction rule:

\[ (\delta\text{-rec})\ \text{unfold}^{l'}(\text{fold}^{l}(e)) \rightarrow e \]

As with polymorphism, $\tau$-cables are empty — this makes even more sense with recursive types, as no variable will ever have type $\tau$ and hence the values that a variable can evaluate to can be read from the graph even without $\tau$-cables. A $\mu\tau.t$ cable has to carry the information from all unfoldings of the type; hence we need a $t$-cable to carry the information of $t$ as well as instantiations with $\mu\tau.t$ of positive occurrences of $\tau$, and a flipped $t$-cable to carry the information of instantiations of negative occurrences of $\tau$. Similarly, we need a wire in each direction carrying ‘fold’ values. Thus a $\mu\tau.t$ cable consists of a forward carrier wire, a backward wire, a $t$-cable and a flipped $t$-cable.
The forward single edge is the carrier and carries the label of the applied fold operation; the backward single edge carries the labels of all fold operations that can occur in argument position. Fold and unfold m-nodes are dual and parameterised by the recursive type involved. An unfold m-node has an incoming $\mu\tau.t$ cable and an outgoing $t[\mu\tau.t/\tau]$ cable. Let superscripts index the occurrences of $\tau$ in $t$ as in the polymorphic case. We connect the edges of the positive sub-cable of the incoming cable to the nodes of the outermost \( t \) on the outgoing side. Furthermore, the incoming \( \mu \tau. t \) cable is connected to each occurrence \( \mu \tau. t^{(i)} \) directly if \( \tau^{(i)} \) is a positive occurrence of \( \tau \) in \( t \), and “switched” if \( \tau^{(i)} \) is a negative occurrence. The fold m-node has an incoming \( t[\mu \tau. t/\tau] \) cable and an outgoing \( \mu \tau. t \) cable. Connections are made similarly.

**Example 3.** The ‘fold’ m-node for folding \((\mu \tau. \tau \rightarrow \tau) \rightarrow (\mu \tau. \tau \rightarrow \tau)\) to \( \mu \tau. \tau \rightarrow \tau \) and the dual ‘unfold’ m-node for unfolding \( \mu \tau. \tau \rightarrow \tau \) to \((\mu \tau. \tau \rightarrow \tau) \rightarrow (\mu \tau. \tau \rightarrow \tau)\) are given below. The type constructors are included to remind the reader of the kind of labels carried by the individual wires.

3.5 Dynamic Types

For dynamic typing we need simple types plus the special type \( D \):

\[ t ::= D \mid \text{Bool} \mid t \rightarrow t' \mid t \times t' \]

The necessary extensions to the type system of figure 1 are given in figure 5 (for more details see Henglein [5]). The conversions \( \text{Bool}! \), \( \text{Fun}! \), \( \text{Pair}! \) correspond to tagging operations: they take an untagged value which the type system guarantees has a given type (e.g. \( \text{Bool} \)) and throw it into the common pool of values about which the type system knows nothing.
In this pool values have tags that can be checked at run time. Conversions \( \text{Bool}? \), \( \text{Fun}? \), \( \text{Pair}? \) check the tag of a value and provide the untagged value of which the type inference now knows the type. Using recursive types and sum types, we already have sufficient power to specify dynamic types. Type \( D \) is equivalent to \[ \mu \tau. ((\tau \rightarrow \tau) + ((\tau \times \tau) + \text{Bool})) \] The conversions are expressible in our language. E.g. \( \text{Bool}!^{l} \) is \( \text{fold}^l \circ \text{inr}^l \circ \text{inr}^l \) and \( \text{Bool}?^{l} \) is \( \text{outr}^l \circ \text{outr}^l \circ \text{unfold}^l \) where \( \text{outr}^l \) is a shorthand for \[ \lambda x.\text{case } x \text{ of } \text{inl}(y) \mapsto \text{error}; \text{inr}(z) \mapsto z \] (having different labels on the sum and recursive type operators would not give any additional information, so we just assume that they are the same). By the coding of dynamic types using sums and recursive types, we find that a D-cable consists of 6 forward and 6 backward edges (the 6 are \( \rightarrow, +, \times, +, \text{Bool}, \mu \)). Since the labels carried by the sum and \( \mu \) edges are the same, we can replace them with one forward and one backward edge carrying tag-labels, where the edges read from top to bottom carry labels of type \( !, \rightarrow, \times, \text{Bool}, !, \rightarrow, \times \) and \( \text{Bool} \). Now the tagging and untagging m-nodes can be found by combining m-nodes for sum and recursive types.

**Example 4.** The \(\text{Fun}!^{l}\) m-node (equivalent to \( \text{fold}^l \circ \text{inl}^l \)) is given in figure 6 (where the left-hand side is a D \( \rightarrow \) D cable and the right-hand side is a D cable).
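To make the encoding of \( D \) concrete, here is a small sketch (the tagged-tuple representation and all names are our own; the paper's conversions operate on λ-terms, not Python values) in which tagging a boolean corresponds to \( \text{fold} \circ \text{inr} \circ \text{inr} \), tagging a function to \( \text{fold} \circ \text{inl} \), and untagging checks the tag at run time:

```python
# Sketch of D = mu tau.((tau -> tau) + ((tau x tau) + Bool)).
# A value of type D is represented as a (tag, payload) pair; the tag plays
# the role of the inl/inr/fold path into the sum-of-sums.
def tag_bool(b):          # Bool!  ~  fold . inr . inr
    return ("Bool", b)

def untag_bool(d):        # Bool?  ~  outr . outr . unfold, checks the tag
    tag, payload = d
    if tag != "Bool":
        raise TypeError("dynamic type error: expected Bool, got " + tag)
    return payload

def tag_fun(f):           # Fun!  ~  fold . inl
    return ("Fun", f)

def untag_fun(d):         # Fun?
    tag, payload = d
    if tag != "Fun":
        raise TypeError("dynamic type error: expected Fun, got " + tag)
    return payload
```

Applying an untagged function to a tagged argument, e.g. `untag_fun(tag_fun(lambda d: d))(tag_bool(True))`, mirrors how every value in the dynamic pool travels with its tag until a `?`-conversion consumes it.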
### 3.6 Soundness

A flow graph is a sound abstraction of the definitions and uses of data during evaluation of an expression \( e \) if (a) whenever there is a redex in \( e \), there is a path starting from the data being consumed and ending at the consumer, and (b) reduction does not add extra paths. We need a lemma for the polymorphism case (\( T[t/\tau] \) means applying the substitution \( [t/\tau] \) to all judgements in the derivation).

**Lemma 5.** Let \( \frac{T}{A \vdash e : t} \) be any type derivation and let \( n \) be any node of \[ G = G\left(\frac{T[t'/\tau]}{A \vdash e : t[t'/\tau]}\right). \] If \( n \) occurs negatively (positively) in the interface of \( G \) and is a node of an occurrence of \( t' \), then every path \( p \) in \( G \) starting from \( n \) either terminates in \( G \) or ends at some \( n' \) such that \( n' \) occurs positively (negatively) in the interface of \( G \) and \( n' \) is a node of an occurrence of \( t' \).

**Proof.** By induction over \( \frac{T}{A \vdash e : t} \).

We can now state and prove soundness for the system with polymorphism, sums and recursive types. Soundness for the dynamic system follows by the construction.

**Theorem 6.** Let \( C, e_1, e_2, T_1, A, t \) be given such that \( \frac{T_1}{A \vdash C[e_1] : t} \) and \( e_1 \) reduces to \( e_2 \) by a \( \beta \) or \( \delta \) redex. Then there exists \( T_2 \) such that

1. \( \frac{T_2}{A \vdash C[e_2] : t} \),
2. if the redex lets \( l \) consume \( l' \) then there is a def-use path from m-node \( l' \) to m-node \( l \) in \( G(\frac{T_1}{A \vdash C[e_1] : t}) \), and
3. \( \{(l_1, l_2) \mid \text{there is a def-use path from } l_1 \text{ to } l_2 \text{ in } G(\frac{T_1}{A \vdash C[e_1] : t})\} \supseteq \{(l_1, l_2) \mid \text{there is a def-use path from } l_1 \text{ to } l_2 \text{ in } G(\frac{T_2}{A \vdash C[e_2] : t})\} \).

**Proof.** Point 1 is standard subject reduction.
We choose \( T_2 \) to be the result of standard cut-elimination on the proof tree. Let \( \frac{T'_1}{A \vdash e_1 : t'} \) and \( \frac{T'_2}{A \vdash e_2 : t'} \) be the subtrees of \( \frac{T_1}{A \vdash C[e_1] : t} \) resp. \( \frac{T_2}{A \vdash C[e_2] : t} \) for \( e_1 \) and \( e_2 \). It is easy to see that the interfaces of \( G_1 = G(\frac{T'_1}{A \vdash e_1 : t'}) \) and \( G_2 = G(\frac{T'_2}{A \vdash e_2 : t'}) \) are identical. We will be sloppy and use the same name for similar nodes in the interfaces of the two graphs. For each possible redex we prove that

(a) if the redex lets \( l \) consume \( l' \) then there is a def-use path from m-node \( l' \) to m-node \( l \) in \( G_1 \),

(b) \( \{(l_1, l_2) \mid \text{there is a def-use path from } l_1 \text{ to } l_2 \text{ in } G_1\} \supseteq \{(l_1, l_2) \mid \text{there is a def-use path from } l_1 \text{ to } l_2 \text{ in } G_2\} \), and

(c) if \( n, n' \) are two nodes in the interface of \( G_1 \) (and hence in \( G_2 \)) then if there is a path from \( n \) to \( n' \) in \( G_2 \) then there is a path from \( n \) to \( n' \) in \( G_1 \).

The individual cases are proven as follows:

\( (\lambda^{l'}x.e) @^{l} e' \rightarrow e[e'/x] \): Consider the graphs for redex and reduct (graphs omitted), where \( G^* \) equals \( G \) with cables from box m-nodes corresponding to occurrences of \( x \) replaced by cables from the copies of \( G' \). Point (a) follows immediately. Since labels are preserved by copying \( G' \), the removal of the \( l' \)–\( l \) def-use path is the only change concerning existence of paths in the graphs (if \( x \) does not occur in \( e \) then all paths in \( G' \) disappear as well). Points (b) and (c) follow.

\( \text{if}^{l}\ \text{True}^{l'}\ \text{then } e \text{ else } e' \rightarrow e \): Again consider the graphs for redex and reduct (graphs omitted). Point (a) is obvious, and since paths are only removed and not added, (b) and (c) follow.
The second reduction rule for ‘if’ is similar.

\( \text{let}^{l}\ (x, y)\ \text{be}\ (e', e'')^{l'}\ \text{in}\ e \rightarrow e[e'/x][e''/y] \): Follows like above by considering the redex and reduct (graphs omitted).

\( \text{fix}^{l}\ x.e \rightarrow e[\text{fix}^{l}\ x.e/x] \): Again by examining graphs (graphs omitted).

\( (\Lambda^{l'}\tau.e)\{t\}^{l} \rightarrow e \): Note that by the type rule, \( \tau \) is not free in \( A \). Now the fact follows from lemma 5.

\( \text{case}^{l}\ \text{inl}^{l'}(e'')\ \text{of}\ \text{inl}(x) \mapsto e;\ \text{inr}(y) \mapsto e' \rightarrow e[e''/x] \): Again examine the redex and reduct (note that \( y \) is not free in \( e \); graphs omitted). The second reduction rule is similar.

\( \text{unfold}^{l'}(\text{fold}^{l}(e)) \rightarrow e \): We have the following situation: the graph \( G \) for \( e \) feeds the fold m-node through cable 1, the fold m-node feeds the unfold m-node through cable 2, and the unfold m-node feeds the context through cable 3. Point (a) is trivial. Assume that the recursive type involved is \( \mu \tau.t \) where \( t \) has \( n \) occurrences of \( \tau \). We use superscripts to identify occurrences of \( \tau \) and copies of \( t \), such that cable 1 is a \( t^{(1,0)}[\mu \tau.t^{(1,i)}/\tau^{(i)}] \) cable (where \( i \in \{1, \ldots, n\} \)), cable 2 is a \( \mu \tau.t^{(2,1)} \) cable and cable 3 is a \( t^{(3,0)}[\mu \tau.t^{(3,i)}/\tau^{(i)}] \) cable. We will identify types and collections of nodes, and refer to the sets of nodes of forward and backward sub-cables (including the \( \mu \) edge) as \( t^{+(j,k)} \) and \( t^{-(j,k)} \) resp. (where \( j \in \{1, 2, 3\} \) indexes the cables).
We can then summarize the connections of the redex as:
\[
\begin{align*}
t^{(1,0)} & \Rightarrow t^{(2,1)} \\
t^{+(1,0)} & \Rightarrow t^{+(2,1)} & \text{if } \tau^{(i)} \text{ is a positive occurrence in } t \\
t^{-(2,1)} & \Rightarrow t^{-(1,0)} & \text{if } \tau^{(i)} \text{ is a positive occurrence in } t \\
t^{-(2,1)} & \Rightarrow t^{+(1,0)} & \text{if } \tau^{(i)} \text{ is a negative occurrence in } t \\
t^{-(1,0)} & \Rightarrow t^{+(2,1)} & \text{if } \tau^{(i)} \text{ is a negative occurrence in } t \\
t^{(2,1)} & \Rightarrow t^{(3,0)} \\
t^{+(2,1)} & \Rightarrow t^{+(3,0)} & \text{if } \tau^{(i)} \text{ is a positive occurrence in } t \\
t^{-(3,0)} & \Rightarrow t^{-(2,1)} & \text{if } \tau^{(i)} \text{ is a positive occurrence in } t \\
t^{+(2,1)} & \Rightarrow t^{-(3,0)} & \text{if } \tau^{(i)} \text{ is a negative occurrence in } t \\
t^{+(3,0)} & \Rightarrow t^{-(2,1)} & \text{if } \tau^{(i)} \text{ is a negative occurrence in } t
\end{align*}
\]
In the reduct we have the connections
\[
\begin{align*}
t^{(1,0)} & \Rightarrow t^{(3,0)} \\
t^{+(1,0)} & \Rightarrow t^{+(3,0)} & \text{if } \tau^{(i)} \text{ is a positive occurrence in } t \\
t^{-(3,0)} & \Rightarrow t^{-(1,0)} & \text{if } \tau^{(i)} \text{ is a positive occurrence in } t \\
t^{-(1,0)} & \Rightarrow t^{-(3,0)} & \text{if } \tau^{(i)} \text{ is a negative occurrence in } t \\
t^{+(3,0)} & \Rightarrow t^{+(1,0)} & \text{if } \tau^{(i)} \text{ is a negative occurrence in } t
\end{align*}
\]
which can all be found by transitivity of the connections in the redex. (Note that the redex contains more connections, e.g. \( t^{(1,0)} \Rightarrow t^{+(3,0)} \), corresponding to the expected loss of information.) Points 2 and 3 of the theorem follow by induction on the structure of \( G \). \(\square\)

## 4 Discussion

### 4.1 Algorithm

So far we have only discussed the construction of flow graphs.
It should be clear, however, that the usual flow problems (“Where is a given occurrence of a value used?”, “Which values are used by a given consumer?” and the full flow problem) can be solved by standard reachability algorithms. Single data or single consumer problems can be answered in linear time while the full flow problem can be answered in quadratic time. In the same way, we can answer questions such as “which values can a given expression evaluate to?”, or “which values can a given variable be bound to?”, though some care should be taken for the latter query in the polymorphic case (as mentioned in section 3.2). We can leave out wires for polymorphism, sums, recursiveness etc. if tracing this is not necessary. Further, if only what is usually known as control flow information\(^6\) is needed then more can be left out. A \(\mu\tau.t\)-cable does not need the backward \(t\)-cable and \(\mu\)-edge if there are no negative occurrences of \(\tau\) in \(t\). It is also possible to reduce the size of graphs by eliminating unnecessary nodes: most box-nodes and the quantification nodes can immediately be removed, but if we are only interested in part of the flow (e.g. only higher-order flow) more can be removed. By reachability (see Ayers [2]) we can remove cables in ‘if’ and ‘case’ provided the flow graph can prove that all values destructed will be True or all values will be False (resp. \(\text{inl}(\cdot)\) or \(\text{inr}(\cdot)\)). Furthermore, if we are only interested in the input/output behaviour, the graph can be reduced to depend only on the size of the interface (i.e. constant under the assumption that types are bounded). This is useful if further polyvariance is desired as cloning the graphs for a definition will be cheap (this option is discussed by Heintze and McAllester [4]).

### 4.2 Precision

It was shown in [6] that the simply typed fragment of this analysis (that is, the analysis presented in section 3.1) is equivalent to closure analysis of the same program.
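As a sketch of how such reachability queries might look (the graph representation and names here are our own, purely illustrative), a breadth-first search over the m-node adjacency lists answers a single-source def-use query in time linear in the number of edges:

```python
from collections import deque

# A flow graph as adjacency lists over labelled m-nodes. A single-source
# search answers "where is a given occurrence of a value used?"; running it
# from every node yields the full (quadratic-time) flow problem.
def reachable(graph, start):
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen
```

Answering "which values are used by a given consumer?" is the same search run on the graph with all edges reversed.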
The equivalence extends in a straightforward manner to sum types. Assume that \(e\) is polymorphically typed. Our analysis can yield better results than closure analysis when analysing \(e\): using the polymorphic type information gives us a degree of polyvariance. Consider the identity function \(\text{id} = \lambda^{l_1} x.x\) of type \(\forall \tau . \tau \to \tau\) and assume two call sites \(\text{id}@\text{True}^{l_3}\) and \(\text{id}@\text{False}^{l_5}\). Closure analysis will find that the result of both applications can be \(l_3\) or \(l_5\). Our analysis will create two instantiation nodes and hence keep the applications separate: the result will be \(l_3\) for the first application and \(l_5\) for the second. If the identity function was given type \(\text{Bool} \to \text{Bool}\) instead, we would get the same (suboptimal) result as closure analysis. This loss of polyvariance would also arise if the identity was specified as \(\lambda^{l_1} x.\,\text{if}^{l_2}\ x\ \text{then } x \text{ else } x\) since this can only be given a monomorphic type. Consider recursive types. Our analysis is potentially as good as closure analysis. Heintze shows that flow analysis based on recursive types is equivalent to closure analysis [3]. It is not difficult to show that our analysis is equivalent to recursive type based analysis, except that the base types are chosen in advance.

---

\(^6\) We prefer to think of this as value flow of control structures.

This implies that for every untyped term \(e\), there exists a type derivation for \(e\) such that the resulting graph contains information equivalent to closure analysis.

**Example 7.** Consider lists. The usual type for lists of elements of type \(t\) is \(\mu \tau.((t \times \tau) + 1)\). Using this type, \([True^{\ell_1}, False^{\ell_2}]\) is shorthand for $$\text{fold}(\text{inl}((True^{\ell_1}, \text{fold}(\text{inl}((False^{\ell_2}, \text{fold}(\text{inr}(\text{unit}))))))))$$ (where we have left out some unnecessary labels).
Using this type for lists, our analysis will find that \(\text{fst}([True^{\ell_1}, False^{\ell_2}])\) (where \(\text{fst}\) is shorthand for \(\lambda x.\,\text{case unfold}(x)\ \text{of}\ \text{inl}(y) \mapsto \text{let}\ (y_1, y_2)\ \text{be}\ y\ \text{in}\ y_1;\ \text{inr}(z) \mapsto \text{error}\)) can result in \(l_1\) as well as \(l_2\). Using a different type for lists (such as \((t \times \mu \tau.((t \times \tau) + 1)) + 1\)) will yield the precise result in this case.

### 4.3 Related Work

Independently of this work, Heintze and McAllester developed a flow analysis which has the same complexity as ours on simply typed programs [4] (on untyped programs it might not terminate). Their analysis is derived by transforming the constraint formulation of flow analysis [8] to avoid computation of dynamic transitive closure. For each node \(n\) there are potentially nodes \(\text{dom}(n)\) and \(\text{ran}(n)\). Due to the boundedness of standard types, the number of nodes is similarly bounded — this corresponds directly to our m-nodes. Types, however, are treated fundamentally differently in their presentation. While types are an integrated part of the generation of our flow graphs, they only appear in the proof of termination/complexity of their algorithm. We believe that this difference makes it difficult to extend their system. In particular:

**Polymorphism.** Since their algorithm does not need the types, it is directly applicable to polymorphically typed programs. Complexity results, however, are preserved only if the monotypes of the expanded program are bounded by a constant. While this is not unreasonable for let-polymorphic languages, our solution preserves the complexity results if the size of types of the original program is bounded by a constant. Furthermore, we get polyvariance for free.

**Recursive types.** Heintze and McAllester show how their analysis can be extended to handle lists. It is not clear, however, how to generalise to arbitrary recursive types — for types $\mu \tau.
t$ where $\tau$ occurs negatively in $t$ the backward cable is a necessary and non-obvious extension which does not fit easily into their framework. As a consequence their analysis is not applicable to dynamically typed programs (as mentioned above, termination is not guaranteed if the analysis is applied to untyped terms). We find that our improvements rely heavily on our formalism and in particular on the availability of types at graph construction time. The notion of well-balanced path by Asperti and Laneve [1] corresponds directly to the usual constraint formulation of flow analysis. Asperti and Laneve refine the concept of well-balanced path to legal path, which captures exactly the set of virtual redexes in a term — a notion of exact flow analysis useful for optimal reduction. If we place a restriction similar to legality on the paths of our graphs, we will obtain an exact flow analysis — as a direct consequence of the precision, the analysis will be non-elementary recursive. We hope to find a formulation of the legality restriction that is amenable to abstraction and describes analyses strictly more precise than closure analysis, but with a non-prohibitive complexity. In particular, we hope to find the first-order analysis

## 5 Conclusion

We have presented a notion of flow graph for higher-order programs. Under the assumption that the size of all types is bounded by a constant, the resulting flow analysis presents an asymptotic improvement over earlier work. Heintze and McAllester have independently of this work reported similar results, but in contrast to their work, our approach handles recursive types and is hence applicable to untyped programs (if a dynamic type discipline is enforced). Furthermore, we handle polymorphism in a more general way which entails a desirable degree of polyvariance.

## Acknowledgements

This paper extends work presented in my Ph.D. thesis [6].
I would like to thank my supervisor Fritz Henglein for many rewarding discussions and my official opponents, Alex Aiken, Nils Andersen and Peter Sestoft, for their comments and suggestions.

## References

1. A. Asperti and C. Laneve. Paths, computations and labels in the λ-calculus. In International Colloquium on Trees in Algebra and Programming (CAAP).
3. N. Heintze. Control-flow analysis and type systems. In A. Mycroft, editor, Symposium on Static Analysis (SAS), volume 983 of LNCS, pages 189–206, Glasgow, 1995.
4. N. Heintze and D. McAllester. Control-flow analysis for ML in linear time.
6. C. Mossin. Ph.D. thesis, University of Copenhagen, January 1997.
Inferring Crypto API Rules from Code Changes Rumen Paletov ETH Zurich, Switzerland rumen.paletov@gmail.com Veselin Raychev DeepCode AG, Switzerland veselin@deepcode.ai Petar Tsankov ETH Zurich, Switzerland ptsankov@inf.ethz.ch Martin Vechev ETH Zurich, Switzerland martin.vechev@inf.ethz.ch Abstract Creating and maintaining an up-to-date set of security rules that match misuses of crypto APIs is challenging, as crypto APIs constantly evolve over time with new cryptographic primitives and settings, making existing ones obsolete. To address this challenge, we present a new approach to extract security fixes from thousands of code changes. Our approach consists of: (i) identifying code changes, which often capture security fixes, (ii) an abstraction that filters irrelevant code changes (such as refactorings), and (iii) a clustering analysis that reveals commonalities between semantic code changes and helps in eliciting security rules. We applied our approach to the Java Crypto API and showed that it is effective: (i) our abstraction effectively filters non-semantic code changes (over 99% of all changes) without removing security fixes, and (ii) over 80% of the code changes are security fixes identifying security rules. Based on our results, we identified 13 rules, including new ones not supported by existing security checkers. CCS Concepts · Security and privacy → Systems security: Cryptanalysis and other attacks; Software security engineering; Keywords Security, Misuse of Cryptography, Learning 1 Introduction Many critical data breaches nowadays are caused by an incorrect use of crypto APIs. Developers often fail to understand and correctly configure cryptographic primitives, such as cryptographic ciphers, secret keys, and hash functions, leading to severe security vulnerabilities that can be abused by attackers [15, 24]. 
As a result, from 100 inspected Android applications, researchers discovered severe man-in-the-middle attacks in 41 of them and were able to gather a large variety of sensitive data [14]. As another data point, researchers have defined 6 common types of mistakes in using crypto APIs and found that a staggering percentage, 88%, of thousands of analyzed Android applications have at least one of these mistakes [12]. To resolve this problem, we argue that developers must check their applications against an up-to-date and comprehensive list of security rules regarding potential misuses of crypto APIs. Unfortunately, creating and updating such a list can be quite challenging as security is a constantly moving target: crypto APIs evolve over time as security experts continue to discover new attacks against existing primitives. For example, researchers have recently discovered the first collision against the SHA-1 cryptographic hash function [30], and are now advising developers to shift to safer alternatives, such as SHA-256. This Work: From Code Changes to API Usage Rules. In this paper, we propose a new approach for learning the correct usage of an API based on code changes, which are readily available in public repositories today. We show that code changes that fix security problems are more common than changes that introduce them, i.e. most problems were introduced in the initial implementation, not in a fix. The immediate benefit of this idea is that we can produce meaningful results even when most developers misuse an API (as it happens with the Java Crypto API). For instance, even if most developers use an outdated, less-secure cryptographic primitive (e.g. SHA-1), our approach can identify, based on a couple of code changes, that developers are switching to a new, more secure primitive (e.g. SHA-256). Key challenge. Attempting to learn from code changes is difficult because many changes do not semantically affect how the API is used (e.g. 
they may be a syntactic re-factoring). To address this challenge, we develop an abstraction tailored to crypto APIs that can capture relevant security properties. Our abstraction captures the semantic features of how a code change affects crypto API usage (e.g., how argument type changes affect the cryptographic mode), which enables filtering of unrelated or non-semantic changes. **Application to the Java Crypto API.** We implemented an end-to-end system, called DiffCode, of our approach for learning semantic changes to the Java Crypto API. To demonstrate the effectiveness of DiffCode, we applied it to thousands of code changes collected from GitHub. DiffCode produces only a few relevant changes that let us derive new security rules. Based on these, we created a new security checker for the Java Crypto API called CryptoChecker which has more rules than prior security checkers. For instance, a novel rule derived from data is to switch from the default Java provider to BouncyCastle. The reason is that BouncyCastle does not have a 128-bit key restriction [3]. **Main Contributions.** Our main contributions are: - A new data-driven approach that learns rules from code changes where the learned rules capture the correct usage of an API (Section 2). - An abstraction tailored to crypto APIs that captures the semantic structure of code changes while abstracting away syntactic details. This abstraction is essential to distilling thousands of concrete code changes into a few semantically meaningful ones (Section 3). - An end-to-end system, called DiffCode, for discovering relevant code changes. Our system consists of: (i) a lightweight AST-based program analysis that supports (partial) code snippets, (ii) abstraction of crypto API usage changes, and (iii) filtering combined with clustering analysis to ease users in inspecting relevant code changes (Sections 4-5). - An extensive evaluation of DiffCode on the Java Crypto API.
Using DiffCode, we identified 13 security rules, including several previously unknown ones (Section 6). We remark that while we focus on crypto APIs, the approach is general and can be applied to other types of APIs. ## 2 Overview of Approach We now present our approach to extracting information about API usages from thousands of code changes. At a high-level, our method is based on two key insights: Our first insight is to focus on code changes, which identify concrete fixes that developers have applied to the code. The old version of the program (before applying the change) often resembles incorrect (or, insecure) usage of the API. Our approach thus differs from existing statistical “Big Code”-type approaches, which focus on discovering statistically common API usages (e.g., [27]). Such statistical approaches are bound to produce less meaningful results in settings where the majority of developers misuse the API, as is the case of crypto APIs. In contrast, our method can discover incorrect usage even when only few developers have applied a correct fix. Our second insight is to leverage program abstraction to derive meaningful, semantic information about code changes. This is necessary to avoid irrelevant syntactic code modifications, such as refactoring. We depict the flow of our learning approach in Figure 1. We now briefly describe its main steps. ### Step 1: Mining Code Changes The first step consists of collecting code changes from open-source repositories. Since we are usually interested in extracting usage changes for particular API classes, such as Cipher and SecretKeySpec from the Java Crypto API, we fetch only patches for classes that use the target API classes. We explain how we collected thousands of code changes for the Java Crypto API, which we used in our experiments, in Section 6. Step 2: Abstract Usage Changes via Static Analysis.
Program abstraction is a key element in our approach that enables us to distill semantic security fixes from thousands of code changes. Developers often commit patches that refactor the code, e.g. to improve readability and performance, without making semantic changes to the target API classes. Program abstraction can be used to discover that such syntactic modifications do not result in semantic changes to the program and how it uses the API. DiffCode downloads both the old and the new versions of the program and statically analyzes each version. It first discovers all different usages of a target API and extracts semantic features about each usage. These features capture which methods are invoked on objects of the API as well as information about the arguments passed to these methods. A usage change is then identified by the change in these features. For instance, suppose that in the old version the program creates an object of type Cipher by calling getInstance() and passing "AES" as an argument. Further, suppose that in the new version the program creates the same object with a call to getInstance() with arguments "AES/CBC" and an initialization vector object. DiffCode would detect this as a semantic change: the program changes the mode of the AES cipher from default Electronic Code Book mode (ECB) to the more secure Chain-Block Cipher mode (CBC). We describe the program abstraction we use in Section 3 and how it is derived using static analysis in Section 5.1. Step 3: Filter and Cluster Usage Changes. The usage changes derived from each project are collected and processed together. Thanks to the abstraction, DiffCode filters out usage changes that are not semantic fixes. For example, if features are neither added nor removed for a particular API usage, then the usage code is likely a refactoring and is thus filtered. 
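The refactoring filter just described can be pictured as a symmetric set difference over abstracted features. The sketch below is only illustrative; the string-based feature encoding and the function name are ours, not DiffCode's actual interface:

```python
# Sketch: abstract a code change as the pair (removed, added) of semantic
# features extracted from the old and new program versions. If neither set
# is non-empty, the change is a pure refactoring and is filtered out.
def usage_change(old_features, new_features):
    removed = old_features - new_features
    added = new_features - old_features
    if not removed and not added:
        return None  # no semantic change to the API usage: filtered
    return removed, added
```

For the running AES example, a hypothetical feature set might go from `{'getInstance("AES")'}` to `{'getInstance("AES/CBC")'}`, which survives the filter as a semantic usage change.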
We apply additional filters to remove duplicate usage changes and changes that either only add or remove features, as these often correspond to adding a new usage of the API or removing an existing one. In the beginning, DiffCode starts with tens of thousands of code changes; after the filtering step, only 186 usage changes remain. Yet, we checked that this filtering step does not remove previously known security-related rules. Then, DiffCode uses a classic hierarchical clustering algorithm on these 186 changes (Section 4) and produces clusters that correspond to security-related rules. At this stage, we manually inspected the clusters and devised security checks that we encoded into a tool called CryptoChecker. The last step in DiffCode is manual for several reasons. First, we did not focus on automating this last step as it involves inspecting only tens of clusters of changes. Second, we manually inspect, document, and explain the derived rules to users. Finally, we remove false positives that introduce security problems as opposed to fixing them – in fact, these are easy to filter out, even automatically, because there are fewer commits in clusters that introduce problems than in clusters that fix them. Overall, we derived 13 security rules, some of which are new rules. The new rules are currently not included in existing security checkers for crypto API misuse. Our rules are described in Section 6. ## 3 Abstraction for API Changes We now present our abstraction for representing the semantic structure of security fixes applied to crypto APIs. We present the terminology, then discuss an abstraction that given (a single version of) a program returns a set of API usages. Finally, given two program versions, we show how to leverage the abstraction in order to capture API usage changes (which may correspond to actual fixes). In the sections that follow we show how to leverage this abstraction for learning security rules.
### 3.1 Example

We first present an example which we later use to illustrate our definitions. In Figure 2(a) we show the code patch for a Java class called AESCipher. The code lines removed from the old version are shown in red (and marked with -) and the added lines are shown in green (and marked with +). This class creates two objects of type Cipher, enc and dec, which are used for encryption and decryption, respectively.

**Old Version.** The old version of AESCipher creates the two objects enc and dec using the method getInstance with the string "AES" passed as an argument. The instance enc is then initialized using init with arguments Cipher.ENCRYPT_MODE, an integer constant defined in the Cipher API, and key, which represents the symmetric key to be used for encryption. The object dec is also initialized with key, but this time the class uses the Cipher.DECRYPT_MODE constant.

**New Version.** In the new version, the developer changes the signature of setKey, which now also takes as argument the object iv of type String. Further, the objects enc and dec are initialized using the string "AES/CBC/PKCS5Padding" (instead of "AES"). With this change, the developer explicitly expresses that the two ciphers must use the AES cipher in Cipher Block Chaining (CBC) mode with the PKCS5 padding scheme. When the two Cipher objects are initialized, the developer passes as argument the object ivSpec of type IVParameterSpec to define the initialization vector that the ciphers must use for the first block they process. The developer also adds lines 11 and 12 to initialize the ivSpec object using the string iv passed as argument to the method.

```
 1    class AESCipher {
 2      Cipher enc, dec;
 3  -   final String algorithm = "AES";
 4  +   final String algorithm = "AES/CBC/PKCS5Padding";
 5
 6  -   protected void setKey(Secret key) {
 7  +   protected void setKey(Secret key, String iv) {
 8  +     byte[] ivBytes;
 9  +     IvParameterSpec ivSpec;
10        try {
11  +       ivBytes = Hex.decodeHex(iv.toCharArray());
12  +       ivSpec = new IvParameterSpec(ivBytes);
13          enc = Cipher.getInstance(algorithm);
14  -       enc.init(Cipher.ENCRYPT_MODE, key);
15  +       enc.init(Cipher.ENCRYPT_MODE, key, ivSpec);
16          dec = Cipher.getInstance(algorithm);
17  -       dec.init(Cipher.DECRYPT_MODE, key);
18  +       dec.init(Cipher.DECRYPT_MODE, key, ivSpec);
19        } catch (...) {
20        }
21      }
22    }
```

(a) Code changes to the two objects (enc and dec) of type Cipher. (b) Usage DAG of the object enc before the change. (c) Usage DAG of the object enc after the change. (d) Removed (red) and added (green) features that capture the usage change of object enc.

Figure 2. Code changes to two objects of the type Cipher and the usage change derived for object enc.

**The Need for Abstraction.** If we consider this example purely syntactically, the lines that call Cipher.getInstance remain unchanged, while most of the syntactic changes are related to the changed signature of the setKey method, which gains an extra parameter. However, if we perform the right program analysis before comparing the two versions, we can abstract the semantically relevant changes for each of the Cipher objects and concisely capture the semantics of the change. In later sections, we explain how DiffCode learns from the example in Figure 2(a) by illustrating the steps on Figures 2(b), 2(c), and 2(d).

### 3.2 Basic Notation and Terminology

Before presenting our abstraction, we describe our notation and terminology and define what we mean by an API usage.

**Types and Methods.** We restrict our attention to crypto APIs for languages such as Java that support base types (e.g., `int`, `byte`, `int[]`, `byte[]`) and object types, which are stored in the heap. We consider an API that defines a set of types $Types = \{ t_1, \ldots, t_n \}$. For instance, the Java Crypto API defines the type `Cipher`, a cryptographic cipher used for encryption and decryption. A method signature is given by $m([t_0], t_1, \ldots, t_k) : t_{ret}$ where $t_0$ is the type of the object on which the method is invoked (the this object), $k$ is the method's arity, each $t_i$ is the type of the $i$th argument, and $t_{ret}$ is the type of the object/value returned by the method.
Note that $t_0$ is defined only for non-static methods. We write $Methods$ to denote the set of all methods. For a given type $t$, $Methods_t \subseteq Methods$ denotes the set of all methods that (i) accept an instance of type $t$ as an argument or (ii) create a new instance of type $t$. For example, the set $Methods_{IVParameterSpec}$ contains the method `Cipher.init(int, Key, AlgorithmParameterSpec)`, as it accepts objects of type `IVParameterSpec` as the third argument. Further, it contains `IVParameterSpec.<init>(byte[])`, which creates a new instance of type `IVParameterSpec`. Note that in addition to constructor methods, $Methods_t$ may also contain factory methods. For example, $Methods_{Cipher}$ contains the factory method `Cipher.getInstance(String):Cipher`.

**Program State.** We assume standard program semantics of an object-oriented language. A program state $\sigma \in States$...

**Concrete API Usages.** A standard way to define a concrete usage of a given type $t$ is to collect the set of all method calls to an object of type $t$, together with the program state at each method call; cf. [27]. Note that a program usually defines multiple objects of the same type $t$ that are then used in different ways, resulting in multiple concrete usages of type $t$. We define the concrete usages as the map:

$$CUses: Objs \rightarrow \mathcal{P}(\text{Methods} \times \text{States}).$$

That is, for a given object $o \in Objs$ of type $t$, the set of pairs $CUses(o) = \{(m_1, \sigma_1), \ldots, (m_n, \sigma_n)\}$ contains the constructor/factory method $m_i \in \text{Methods}_t$ used to create $o$, as well as the methods $m_j \in \text{Methods}_t$ that take the object $o$ as an argument. A method $m$ may be invoked multiple times with object $o$ at different program states, resulting in multiple pairs $(m, \sigma_1), \ldots, (m, \sigma_k)$ in $CUses(o)$.

We note that the concrete usages $CUses$ for a given program will typically not be computable in practice, as the program may allocate an unbounded set of objects and may have an unbounded number of states. Our abstraction, defined below, allows us to capture these unbounded sets with a finite set of abstract usages.

### 3.3 Abstraction of API Usage

We now present our abstraction, which we use to capture the usage of a particular API type. It consists of: (i) a heap abstraction, to represent the unbounded set of concrete objects with finitely many allocation sites, (ii) a base-types abstraction, and (iii) a per-object Cartesian abstraction that keeps track of the method calls and abstract states associated with the abstract objects. We define this abstraction below and explain how to apply it to programs using static analysis in Section 5.1.

**Heap Abstraction.** Since a program may instantiate a potentially unbounded number of objects of a given type, we use a per-allocation-site abstraction. That is, each constructor/factory method call, such as Cipher.getInstance(String), results in one abstract object identified by the statement's label. We denote by $AObjs$ the set of abstract objects. We use $\top_{obj} \in AObjs$ to represent that the allocation of an abstract object is unknown; e.g., the allocation of the method parameter key is not defined in Figure 2(a).

**Base Types.** Base-type values are abstracted as shown in Figure 3:

- **int**: $Ints(P) \cup \{\top_{int}\}$
- **int[]**: $IntArrays(P) \cup \{\top_{int[]}\}$
- **string**: $Strs(P) \cup \{\top_{str}\}$
- **string[]**: $StrArrays(P) \cup \{\top_{str[]}\}$
- **byte**: $\{const_{byte}, \top_{byte}\}$
- **byte[]**: $\{const_{byte[]}, \top_{byte[]}\}$

Figure 3. Abstract base-type values for a program $P$.

**Abstract States.** An abstract state consists of a finite set of abstract objects $aObjs \subseteq AObjs$, an abstract heap $\sigma^a: AObjs \times \text{Fields} \rightarrow AVals$, and an abstract environment of local variables $\Delta^a: \text{Vars} \rightarrow AVals$, where the abstract values $AVals$ are the abstract objects together with the abstract base-type values. We denote by $AStates$ the set of all abstract states.

**Abstract API Usage.** We lift our notion of concrete usages to abstract usages, denoted by $AUses$: instead of tracking the usage of each concrete object we track the usage of abstract objects, and instead of collecting the concrete states at method calls we collect abstract states. Formally, abstract usages are captured with the map:

$$AUses : AObjs \rightarrow \mathcal{P}(\text{Methods} \times AStates).$$

That is, for a given abstract object $o^a \in AObjs$ of type $t$, we obtain a set $AUses(o^a) = \{(m_1, \sigma^a_1), \ldots, (m_k, \sigma^a_k)\}$ where each $m_i \in \text{Methods}_t$ is a method and each $\sigma^a_i$ is an abstract state. Each $AUses(o^a)$ defines one abstract usage, while together all abstract objects in a program define the set of all abstract usages. Since there are finitely many abstract objects, methods, and abstract states, there are also finitely many abstract usages for a given program.
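To make the shape of this map concrete, here is a minimal Python sketch of $AUses$ for the enc object of Figure 2; the allocation-site labels and the reduced abstract state are our own illustrative encoding, not DiffCode's data structures:

```python
# Minimal sketch of the AUses map of Section 3.3 (an illustration, not
# DiffCode's implementation). Abstract objects are allocation-site labels
# such as "l13" and "l12" (hypothetical here); TOP marks an unknown
# allocation, and each abstract state is reduced to the abstract values
# of the call's arguments.
TOP = "TOP"

AUses = {
    "l13": {  # enc = Cipher.getInstance(algorithm); enc.init(...)
        ("Cipher.getInstance(String):Cipher",
         (("algorithm", "AES/CBC/PKCS5Padding"),)),
        ("Cipher.init(int,Key,AlgorithmParameterSpec)",
         (("opmode", "ENCRYPT_MODE"), ("key", TOP), ("params", "l12"))),
    },
    "l12": {  # ivSpec = new IvParameterSpec(ivBytes)
        ("IvParameterSpec.<init>(byte[])", (("iv", TOP),)),
    },
}

# Finiteness: the number of abstract usages is bounded by the number of
# allocation sites in the program, not by the number of run-time objects.
assert len(AUses) == 2 and all(len(u) > 0 for u in AUses.values())
```

Each key is one abstract usage; the per-object sets stay finite because every component (abstract objects, methods, abstract states) is drawn from a finite domain.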
### 3.4 Abstract Usages as Directed Acyclic Graphs

Given the abstract usages defined by $AUses$ and an abstract object $o^a$, we construct a rooted directed acyclic graph (DAG) $G = (N, E, r)$ with nodes

$$N \subseteq (\text{Methods} \times AStates) \cup (\mathbb{N} \times AVals),$$

edges $E \subseteq N \times N$, and root $r = (0, o^a) \in N$. Each node in the DAG is either a pair $(m, \sigma^a)$ of a method $m$ and an abstract state $\sigma^a$, or a pair $(i, a)$ where $i \in \mathbb{N}$ represents an argument index and $a \in AVals$ is an abstract value (i.e., an abstract object or base-type value). We label the nodes in the graph as follows. A node $(m, \sigma^a)$ is labeled by the signature of $m$. A node $(i, a)$ is labeled by $(i, a)$ if $a$ is an abstract base-type value (e.g., $\top$); otherwise, $a$ is an abstract object and the node is labeled by $(i, \text{type}(a))$, where $\text{type}(a)$ returns the type of $a$ (e.g., Cipher). Examples of these DAGs are given in Figures 2(b) and 2(c). We depict node labels as $\text{arg1} : \text{AES}$ instead of $(1, \text{AES})$ to emphasize that 1 represents an argument index. Further, we omit the index 0 in root node labels (as roots always have index 0). Below we explain how these DAGs are constructed.

**Constructing a DAG.** To construct the graph for an abstract object $o^a$, we first add the root $(0, o^a)$. Then, starting from the root, the graph is iteratively constructed by performing the following two steps on each node $(i, abs)$:

1. For each $(m, \sigma^a) \in AUses(abs)$, we add a node $(m, \sigma^a)$ and an edge $((i, abs), (m, \sigma^a))$.
2. For each node $(m, \sigma^a)$ created in step (1), we add up to $k$ children, where $k$ is the arity of $m$. First, for each parameter $p_j$ of $m$, we add a node $(j, abs_j)$ where $abs_j = \Delta^a(p_j)$. Then, we add an edge $((m, \sigma^a), (j, abs_j))$ if it does not introduce a cycle in the graph.
The above steps are performed iteratively in a breadth-first manner, first expanding the root node (depth 0 of the rooted DAG). Then, we process the nodes $(i, abs)$ at depth 2 of the DAG where $abs \in AObjs$ is an abstract object such that $abs \neq \top$. Note that we skip the nodes at depth 1 as they contain only method nodes. We continue this process until a fixed depth $n$ (in our experiments, we set $n$ to 5).

**Example.** To illustrate the graph construction, we refer to the example in Figure 2. The code after the change (the green and white lines in Figure 2(a)) has two abstract objects of type Cipher: one allocated at line 13 and another one at line 16. Figure 2(c) depicts the graph constructed for the abstract object $l_{13}$ (i.e., the enc object). The root node of the graph is $(0, l_{13})$, and it is labeled by $(0, \text{Cipher})$ because the type of $l_{13}$ is Cipher. The abstract usage of $l_{13}$ is given by

$$AUses(l_{13}) = \{(\text{getInstance}, \sigma^a_{13}), (\text{init}, \sigma^a_{13})\}.$$

The root node therefore has two children, labeled getInstance and init, respectively. Node getInstance has one child $(1, \text{AES/CBC/PKCS5Padding})$ because

$$\Delta^a_{l_{13}}(\text{algorithm}) = \text{AES/CBC/PKCS5Padding}.$$

Node init has three children. The first one is $(1, \text{ENCRYPT\_MODE})$, where $\text{ENCRYPT\_MODE}$ is an integer constant defined in Cipher. The second child is $(2, \top)$ and is labeled by $(2, \text{Secret})$. This child has no further children, as $\Delta^a_{l_{13}}(\text{key}) = \top$ (the allocation of the object key is not defined in the code). Finally, the third child is $(3, l_{12})$ and is labeled by $(3, \text{IVParameterSpec})$. The abstract object $l_{12}$ is recursively expanded with the constructor method $\text{<init>}$ and its argument $\top_{byte[]}$.
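The construction just illustrated can be sketched end-to-end in Python (self-contained; the encoding of abstract objects, labels, and states is hypothetical and not DiffCode's implementation):

```python
# Self-contained sketch of the breadth-first DAG construction of
# Section 3.4. AUSES maps an allocation-site label to its
# (method, argument-values) pairs; "TOP" marks an unknown allocation or
# base value. All labels are hypothetical.
TOP = "TOP"
AUSES = {
    "l13": [  # enc = Cipher.getInstance(algorithm); enc.init(...)
        ("getInstance", ["AES/CBC/PKCS5Padding"]),
        ("init", ["ENCRYPT_MODE", TOP, "l12"]),
    ],
    "l12": [  # ivSpec = new IvParameterSpec(ivBytes)
        ("IvParameterSpec.<init>", [TOP]),
    ],
}

def build_dag(root_obj, max_depth=5):
    """Alternate value nodes (index, abstract value) and method nodes
    (method, receiver); expand known abstract objects breadth-first up
    to max_depth, expanding each object at most once to avoid cycles."""
    root = (0, root_obj)
    nodes, edges = {root}, set()
    frontier, expanded, depth = [root], {root_obj}, 0
    while frontier and depth < max_depth:
        next_frontier = []
        for i, abs_val in frontier:
            for method, args in AUSES.get(abs_val, []):
                mnode = (method, abs_val)
                nodes.add(mnode)
                edges.add(((i, abs_val), mnode))
                for j, arg in enumerate(args, start=1):
                    child = (j, arg)
                    nodes.add(child)
                    edges.add((mnode, child))
                    if arg in AUSES and arg not in expanded:
                        expanded.add(arg)
                        next_frontier.append(child)
        frontier, depth = next_frontier, depth + 2  # value -> method -> value
    return nodes, edges, root

nodes, edges, root = build_dag("l13")
assert (("init", "l13"), (3, "l12")) in edges  # init's third argument is l12
```

With this encoding, the resulting nodes roughly correspond to the labeled nodes of the DAG for enc: the root, the getInstance and init method nodes with their argument children, and the recursively expanded constructor of $l_{12}$.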
### 3.5 From DAGs to Usage Changes

To capture the semantic meaning of a code change with respect to a type $t$, we derive the abstract usages $AUses_1$ and $AUses_2$ for the old and, respectively, the new version of the program. We proceed in three steps, as depicted in Figure 4. First, for each version, we derive the rooted DAGs of all abstract objects of type $t$, as explained in Section 3.4. Note that we may obtain multiple DAGs for each version (determined by the number of allocation sites of objects of type $t$ in that version). Second, we pair the DAGs of the old version with those of the new version based on a distance metric (defined below) that captures the similarity between two DAGs. Finally, given a pair of DAGs, we derive features that describe the semantic change between the two usages. We refer to these features as a usage change. The result of the three steps is a set of usage changes.

**Distance Between DAGs.** We first define a metric that reflects the distance between two DAGs $G_1 = (N_1, E_1, r_1)$ and $G_2 = (N_2, E_2, r_2)$. The distance between the DAGs is given by an intersection-over-union measure over their sets of nodes:

$$\text{dist}(G_1, G_2) = 1 - \frac{|N_1 \cap N_2|}{|N_1 \cup N_2|}.$$

This measure reflects the nodes on which the two graphs differ, while also respecting the edges. For instance, for the DAGs $G_1$ and $G_2$ depicted in Figures 2(b) and 2(c), respectively, we get $\text{dist}(G_1, G_2) = \frac{1}{2}$.
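This node-set distance can be sketched directly in Python; the node inventories below are a hypothetical encoding of the two enc DAGs, chosen only to make the computation concrete (not the actual node sets of Figure 2):

```python
# Sketch of dist(G1, G2) = 1 - |N1 ∩ N2| / |N1 ∪ N2| from Section 3.5.
def dist(nodes1, nodes2):
    n1, n2 = set(nodes1), set(nodes2)
    return 1 - len(n1 & n2) / len(n1 | n2)

# Hypothetical node sets for the enc DAG before and after the change.
old = {(0, "Cipher"), "getInstance", (1, "AES"),
       "init", (1, "ENCRYPT_MODE"), (2, "Secret")}
new = {(0, "Cipher"), "getInstance", (1, "AES/CBC/PKCS5Padding"),
       "init", (1, "ENCRYPT_MODE"), (2, "Secret"),
       (3, "IVParameterSpec"), "<init>", (1, "byte[]")}

assert dist(old, new) == 0.5  # 5 shared nodes out of a union of 10
```

With this particular inventory the result happens to match the value $\frac{1}{2}$ reported above; the exact node sets depend on the abstraction.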
**Pairing DAGs.** Suppose that the DAGs derived from the old version are $V_{\text{old}} = \{A_1, \ldots, A_n\}$ and those derived from the new version are $V_{\text{new}} = \{B_1, \ldots, B_k\}$; we depict these as old/new version DAGs in Figure 4. For simplicity, we assume that $|V_{\text{old}}| = |V_{\text{new}}|$, i.e. the two versions have an equal number of DAGs; if this is not the case, we extend the version with fewer DAGs with DAGs of the form $G = (\{r\}, \emptyset, r)$, which contain only a root node $r$ labeled with the type $t$. We then solve an assignment problem: we map the DAGs in $V_{\text{old}}$ to unique DAGs in $V_{\text{new}}$ (and vice versa) such that the sum of the distances of the paired graphs is minimized. Since $|V_{\text{old}}| = |V_{\text{new}}|$, the mapping $m \subseteq V_{\text{old}} \times V_{\text{new}}$ is a bijection. Formally, let $M$ be the set of all possible mappings. The distance of a given mapping $m \in M$ is:

$$mdist(m) = \sum_{(G_1, G_2) \in m} dist(G_1, G_2).$$

To pair the DAGs in the two versions, we find a minimum-distance mapping according to $mdist(m)$. Note that there may be multiple such mappings. For the example provided in Figure 4, the mapping produces the pairs $(A_1, B_1), (A_2, B_2)$, and so forth. We color the nodes in green/red to emphasize which nodes in the paired DAGs differ.

**From DAG Pairs to Usage Changes.** We use the following notation to define the derivation of usage changes. Let $G = (N, E, r)$ be a rooted DAG. Given two paths $p$ and $p'$, we write $p < p'$ if $p$ is a strict prefix of $p'$, i.e. $p$ is a prefix of $p'$ and the length of $p$ is strictly smaller than that of $p'$. For a set of paths $P$, we define

$$\text{Shortest}(P) = \{ p \in P \mid \neg\exists p' \in P.\; p' < p \}.$$

That is, $\text{Shortest}(P)$ contains a path $p$ if and only if no other path in $P$ is a strict prefix of $p$.
For example, for the set of paths

$$P = \{ a \to b,\; a \to b \to c,\; b \to c \},$$

we get $\text{Shortest}(P) = \{ a \to b,\; b \to c \}$. Given two DAGs $G_1$ and $G_2$, we define the shortest removed paths of $G_1$ with respect to $G_2$, denoted by $\text{Removed}(G_1, G_2)$, as:

$$\text{Removed}(G_1, G_2) = \text{Shortest}(\text{Paths}(G_1) \setminus \text{Paths}(G_2)).$$

That is, $\text{Removed}(G_1, G_2)$ contains the shortest path prefixes of $G_1$ that are not in $G_2$. We define the usage change between two DAGs $G_1$ and $G_2$ as a pair $\text{Diff}(G_1, G_2) = (F^-, F^+)$ where $F^- = \text{Removed}(G_1, G_2)$ and $F^+ = \text{Removed}(G_2, G_1)$. That is, the set of paths $F^-$ contains the shortest prefixes removed from $G_1$, while $F^+$ contains those added in $G_2$. In Figure 2(d) we show in detail the usage change derived from the DAGs depicted in Figures 2(b) and 2(c).

## 4 Clustering Semantic Usage Changes

In this section, we describe our approach for filtering and clustering the obtained usage changes, that is, the output of Figure 4 described earlier. Our filters aim to eliminate irrelevant, non-semantic usage changes. The remaining semantic usage changes are then clustered so as to ease users in inspecting them and eliciting security rules.

### 4.1 Extract Usage Changes

The input to the first step is a set of code changes. Given this input, we derive the usage changes from each code change, as described in Section 3. Note that each code change results in a set of usage changes, because the old and new versions of the program may instantiate multiple objects of the same type and use them differently in the code.

### 4.2 Filter Uninteresting Usage Changes

The goal of this procedure is, given the large list of usage changes, to filter out those that are not relevant for deriving security rules. Uninteresting changes either do not affect crypto APIs, refactor crypto API calls, or introduce/delete code (as opposed to fixing an error).
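The Shortest, Removed, and Diff definitions of Section 3.5, which produce the $(F^-, F^+)$ pairs consumed by the filters, can be sketched in Python (paths as tuples of labels; our own illustration, not DiffCode's code):

```python
def shortest(paths):
    """Shortest(P): keep p iff no other path in P is a strict prefix of p."""
    return {p for p in paths
            if not any(q != p and p[:len(q)] == q for q in paths)}

def removed(paths1, paths2):
    """Removed(G1, G2): shortest prefixes present in G1 but not in G2."""
    return shortest(paths1 - paths2)

def diff(paths1, paths2):
    """Diff(G1, G2) = (F-, F+): removed and added shortest prefixes."""
    return removed(paths1, paths2), removed(paths2, paths1)

# Worked example from the text:
P = {("a", "b"), ("a", "b", "c"), ("b", "c")}
assert shortest(P) == {("a", "b"), ("b", "c")}

# A hypothetical one-path change of a Cipher transformation string:
f_minus, f_plus = diff({("getInstance", "AES")},
                       {("getInstance", "AES/CBC")})
assert f_minus == {("getInstance", "AES")}
```

The strict-prefix test works on tuples because slicing a path to the length of a longer or equal-length path can never reproduce that other path.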
The input of the filtering procedure is a list of usage changes. Each usage change is a pair \((F^-, F^+)\) of path sets, where \(F^-\) contains the features removed from the old version and \(F^+\) contains the features added to the new version. We filter out a usage change \((F^-, F^+)\) if one of the following conditions holds:

- **No-changes** \((f_{\text{same}})\): Both \(F^-\) and \(F^+\) are empty. This indicates that, with respect to our abstraction, the API usage is identical in the old and the new version of the program, i.e. there is no semantic change of API usage. This filter removes the majority of the changes.
- **No-removals** \((f_{\text{add}})\): \(F^-\) is empty. This typically indicates that a usage of type \(t\) was added to the code. We do not use such changes, since they simply say that the crypto API was introduced.
- **No-additions** \((f_{\text{rem}})\): \(F^+\) is empty. This indicates that a usage of type \(t\) was removed from the code.
- **No-duplicates** \((f_{\text{dup}})\): There is another usage change \((F'^-, F'^+)\) in the set such that \(F^- = F'^-\) and \(F^+ = F'^+\), i.e. an identical usage change already appears in the set.

To see the effect of each filter, we run the filters in turn and report the number of usage changes remaining after each filter. We remark that \(f_{\text{add}}\) and \(f_{\text{rem}}\) together subsume \(f_{\text{same}}\), but we still consider \(f_{\text{same}}\) separately to report the number of changes that do not affect crypto APIs (shown later in Figure 6).

### 4.3 Cluster Usage Changes

After obtaining the filtered semantic usage changes, we cluster them to gain insights into how developers fix crypto API usages. Clustering is useful because multiple similar changes indicate a common misconception or a common fix regarding the API.
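The four filter conditions of Section 4.2 can be sketched as predicates over a usage change \((F^-, F^+)\); all names below are illustrative, not DiffCode's API:

```python
# Sketch of the four filters of Section 4.2 over a usage change
# (F_minus, F_plus), each represented as a frozenset of label paths.
def f_same(fm, fp):
    return not fm and not fp        # no semantic change (refactoring)

def f_add(fm, fp):
    return not fm and bool(fp)      # a usage was only added

def f_rem(fm, fp):
    return bool(fm) and not fp      # a usage was only removed

def keep(changes):
    """Return the usage changes that survive all four filters
    (the duplicate filter f_dup is realized by the `seen` set)."""
    seen, kept = set(), []
    for fm, fp in changes:
        if f_same(fm, fp) or f_add(fm, fp) or f_rem(fm, fp):
            continue
        if (fm, fp) in seen:        # f_dup: identical change already seen
            continue
        seen.add((fm, fp))
        kept.append((fm, fp))
    return kept

changes = [
    (frozenset(), frozenset()),                      # refactoring -> f_same
    (frozenset(), frozenset({("init",)})),           # new usage   -> f_add
    (frozenset({("init", "AES")}), frozenset({("init", "AES/CBC")})),
    (frozenset({("init", "AES")}), frozenset({("init", "AES/CBC")})),  # dup
]
assert len(keep(changes)) == 1
```

Only the genuinely semantic, non-duplicate change survives, mirroring how the filters shrink tens of thousands of changes to a small set worth inspecting.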
To perform this step, we use a classic clustering algorithm based on the same features used in the previous steps. We now define a metric that captures the distance between a pair of usage changes. Our metric compares the features that the usage changes remove from the old version and add to the new version. We first define a measure of distance between two paths and then lift this measure to compare usage changes.

**Distance Between Two Paths.** We use the following notation. Given a path \(p = l_0 \rightarrow \ldots \rightarrow l_n\) and two indices \(0 \leq i \leq j \leq n\), we write \(p[i]\) for \(l_i\) and \(p[i : j]\) for the path \(l_i \rightarrow \ldots \rightarrow l_j\). Given two paths \(p_1\) and \(p_2\), we denote by \(\text{commonPrefix}(p_1, p_2)\) the length of their longest common prefix. That is, \(\text{commonPrefix}(p_1, p_2)\) returns the index \(j\) if and only if \(p_1[0 : j] = p_2[0 : j]\) and \(p_1[0 : j + 1] \neq p_2[0 : j + 1]\). Further, we write \(\text{lev}(l, l')\) to denote the Levenshtein distance [8] between two node labels \(l\) and \(l'\), which is the smallest number of modifications (insertions, deletions, and substitutions) required to change one label into the other. As units for modifications, we use characters for strings, while integers, bytes (which are abstracted to \(const_{byte}\) and \(\top_{byte}\)), and method names are treated as single units. For example, it takes 1 modification (more precisely, 1 substitution) to change any method signature into a different one.
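The commonPrefix and lev ingredients can be sketched as follows (a simplified illustration that uses character units for string labels and treats every sequence element uniformly):

```python
def common_prefix(p1, p2):
    """commonPrefix: length of the longest common prefix of two paths."""
    j = 0
    while j < min(len(p1), len(p2)) and p1[j] == p2[j]:
        j += 1
    return j

def lev(a, b):
    """Levenshtein distance via dynamic programming; when a and b are
    strings, the modification units are characters."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

assert common_prefix(("Cipher", "getInstance", "AES"),
                     ("Cipher", "getInstance", "AES/CBC")) == 2
assert lev("AES", "AES/CBC") == 4  # insert "/CBC"
```

Treating a whole method signature as a single unit, as the text prescribes, amounts to comparing one-element sequences, for which any two distinct signatures are at distance 1.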
We define the Levenshtein similarity ratio between two labels \(l\) and \(l'\) as:

\[ \text{LSR}(l, l') = 1 - \frac{\text{lev}(l, l')}{\max(|l|, |l'|)} \]

The distance between paths \(p_1\) and \(p_2\) is \(\text{pathDist}(p_1, p_2) = 0\) if \(p_1\) is identical to \(p_2\); otherwise it is defined as:

\[ \text{pathDist}(p_1, p_2) = 1 - \frac{j + \text{LSR}(p_1[j + 1], p_2[j + 1])}{\max(|p_1|, |p_2|)} \]

where \(j = \text{commonPrefix}(p_1, p_2)\) is the length of the longest common prefix of \(p_1\) and \(p_2\). In the numerator, we take the length of the common prefix and add the Levenshtein similarity ratio between the first labels on which the two paths differ. In the denominator, we take the length of the longer of the two paths.

**Distance Between Two Usage Changes.** We define the distance between two sets of paths \(F_1\) and \(F_2\), \(\text{pathsDist}(F_1, F_2)\), as the smallest distance obtained by first matching the paths in the two sets and then summing their pair-wise path distances. Given two usage changes \(C_1 = (F^-_1, F^+_1)\) and \(C_2 = (F^-_2, F^+_2)\), we define their distance as the average of the distances between \(F^-_1\) and \(F^-_2\) and between \(F^+_1\) and \(F^+_2\):

\[ \text{usageDist}(C_1, C_2) = \frac{\text{pathsDist}(F^-_1, F^-_2) + \text{pathsDist}(F^+_1, F^+_2)}{2} \]

The distance metric \(\text{usageDist}(C_1, C_2)\) allows us to compare how semantically similar two usage changes are.

**Hierarchical Clustering.** We use an agglomerative hierarchical clustering algorithm to group similar usage changes and structure them in a tree. Agglomerative clustering first introduces a leaf node in the tree for each usage change and then iteratively merges the two closest clusters. As the distance between leaves we use \(\text{usageDist}\); the distance between clusters (also known as the linkage) determines which clusters of usage changes are merged higher in the tree.
We use complete linkage, where the distance between two clusters \(X\) and \(Y\) is given by:

\[ \text{clusterDist}(X, Y) = \max_{C_1 \in X, C_2 \in Y} \text{usageDist}(C_1, C_2) \]

## 5 The DiffCode System

In this section, we first present DiffCode, a system which implements the abstraction for usage changes described in Section 3, together with the filtering and clustering procedures presented in Section 4. In the following section, we show how DiffCode can be used to infer semantic usage changes to the Java Crypto API, based on thousands of code changes we have collected from GitHub.

### 5.1 System Overview

DiffCode is a new end-to-end system that implements the usage-changes abstraction for Java APIs. Our implementation is in Python and spans roughly 7K lines of code. DiffCode takes as input a set of program pairs (old and new versions of a Java program) together with a target API class, and outputs a list of semantic usage changes of the target API class together with a clustering diagram of these usage changes. The main component of DiffCode is a lightweight AST-based program analyzer, described below.

**AST-based Program Analyzer.** DiffCode uses a custom AST-based analyzer since many of the program versions provided as input are partial programs, such as library code without an explicit entry point and code snippets, that cannot be easily compiled. Further, we opted for an efficient and scalable analyzer that avoids heavy-weight static analysis, such as SPARK's points-to analysis [19]. Our program analyzer takes as input a GitHub username, project name, and commit ID, which together identify a Java project version. Additionally, it takes the target API class for which we want to discover usage changes. Our analyzer first finds all allocation sites of the target class (which correspond to abstract objects). For each allocation site, located in some method \( m \), it finds the program's entry methods that can lead to executions that call method \( m \).
Note that there may be multiple such entry methods (i.e., other than main) if the code is a partial program or a library. For each entry method, the analyzer performs a forward execution of all relevant operations, such as object allocations and field accesses, to track the set of possible values that can be assigned to fields and variables. At each branching point (e.g., an if statement), the analyzer forks the execution into two executions and analyzes them independently. The result is a set of executions with derived abstract states at each method call. Each execution is used to derive a DAG as described in Section 3.4. Our analysis is inter-procedural; it currently does not support deep inheritance hierarchies or virtual functions.

## 6 Case Study: Java Crypto API

We now describe a case study in which we use Java projects collected from GitHub to learn semantic changes of the Java Crypto API [26]. Extracting security fixes for the Java Crypto API is relevant because: (i) developers often misuse this API [12, 22, 24], and (ii) crypto modes become obsolete over time as security experts discover attacks, and thus new, up-to-date security rules for the Java Crypto API are needed (e.g., see [30]). More concretely, in this section we address the following research questions. First, we investigate the effectiveness of our abstraction and filters in distilling semantic code changes, and whether these are security fixes or buggy changes. Second, we report on our experience in clustering code changes and eliciting security rules. Finally, we investigate the relevance of the elicited security rules by applying them to Java projects collected from GitHub.

### 6.1 Experimental Setup

**Data Set.** To obtain our training data set, we scanned over 30,000 popular GitHub repositories. We selected the master branches of projects that use the Java Crypto API and have at least 30 commits. We de-duplicated projects whose commit histories share a common prefix.
We remark that our selection method helps DiffCode ignore toy projects that are unlikely to contain interesting code changes. Indeed, our method selected some of the most starred Java projects, excluding forks. For training, our selection led to 461 Java projects from 397 distinct users. We cloned these repositories and traversed the master-branch commits of each repository. We consider 6 target API classes in our case study; see Figure 5. For each commit that changes at least one target class, we fetched the versions before and after the commit. Using this procedure, we collected a total of 11,551 code changes (i.e., pairs of programs) for all of the target classes.

### 6.2 Effectiveness of Abstraction and Filtering

In Figure 6, we give the number of usage changes for each target API class and show the effectiveness of our abstraction and filters. The second column gives the total number of usage changes, and the four columns to the right show the number of usage changes that remain after each filtering stage (see Section 4.2 for the list of stages). For example, the filter $f_{same}$, which removes changes that do not affect the target class, reduces the number of changes by more than an order of magnitude (e.g. 419 down from 15,829 for class Cipher). Filtering out changes that add or remove API calls of the target class, as well as removing duplicates, further reduces the number of changes by another order of magnitude. In the end, the number of remaining changes makes the follow-up manual inspection feasible (e.g. only 75 changes for the Cipher class).

**Security Fixes vs. Buggy Changes.** Next, we address two questions: (i) whether the collected code changes represent security fixes or buggy changes, and (ii) whether the filters keep these while removing the non-semantic code changes. To distinguish between security fixes and buggy changes, we encoded five security rules supported in CryptoLint [12], a security checker for crypto APIs. We denote these rules CL1-CL5.
For instance, the first rule CL1 states: "Do not use ECB mode for encryption". For each change, we check whether a rule triggers in the old version (before applying the change) and whether it triggers in the new version (after applying the change). Based on the result, we classify each code change as: (i) a security fix, if a rule triggers in the old version but not in the new version, (ii) a buggy change, if a rule triggers in the new version but not in the old version, and (iii) a non-semantic change, if the rule triggers identically in both versions. In Figure 7, we give the number of usage changes classified into security fixes, buggy changes, and non-semantic changes with respect to rules CL1-CL5. Note that the total number of changes varies across the rules, as we count only changes that are applicable to each rule. For example, CL1 refers to the class Cipher, for which we have collected 15,829 usage changes in total. In the figure, we also show the number of usage changes removed by each filter. The data shows two important findings. First, most code changes are non-semantic with respect to these rules. The filters, however, effectively eliminate the non-semantic changes; the most effective filter is $f_{same}$, which detects and eliminates code refactorings. Second, the semantic changes are not filtered out: only one semantic change is eliminated, by the $f_{dup}$ filter, which removes a duplicate security fix (see CL1). The data in Figure 7 also shows that most of the changes are indeed security fixes, not buggy changes: over 80% of the code changes correspond to actual security fixes.

### 6.3 Clustering Security Fixes and Eliciting Rules

Next, we report on our experience in eliciting rules from the security fixes. We inspected each fix, together with any other modifications that are similar (in terms of distance), on GitHub.
In more detail, we inspected the concrete code patch, the commit message, and any additional comments that describe the commit.

Clustering Security Fixes. We constructed a dendrogram for each target API class, using the hierarchical clustering algorithm described in Section 4.3. We depict a (partial) dendrogram derived for the Cipher API class in Figure 8. This dendrogram shows three usage changes. These changes show that developers are switching from the insecure ECB mode of <table> <thead> <tr> <th>Target API class</th> <th>Usage changes</th> <th>After $f_{same}$</th> <th>After $f_{add}$</th> <th>After $f_{rem}$</th> <th>After $f_{dup}$</th> </tr> </thead> <tbody> <tr> <td>Cipher</td> <td>15829</td> <td>419</td> <td>204</td> <td>116</td> <td>75</td> </tr> <tr> <td>IvParameterSpec</td> <td>4967</td> <td>58</td> <td>24</td> <td>12</td> <td>11</td> </tr> <tr> <td>MessageDigest</td> <td>8277</td> <td>116</td> <td>78</td> <td>27</td> <td>17</td> </tr> <tr> <td>SecretKeySpec</td> <td>15543</td> <td>226</td> <td>120</td> <td>55</td> <td>45</td> </tr> <tr> <td>SecureRandom</td> <td>26008</td> <td>309</td> <td>131</td> <td>26</td> <td>21</td> </tr> <tr> <td>PBEKeySpec</td> <td>1549</td> <td>29</td> <td>21</td> <td>17</td> <td>17</td> </tr> </tbody> </table> Figure 6. Usage changes per target API class after abstraction and filtering.
The actual commits are available at http://diffcode.ethz.ch <table> <thead> <tr> <th>Rule</th> <th>Change type</th> <th>Total changes</th> <th>Filtered changes</th> <th>Remaining</th> </tr> </thead> <tbody> <tr> <td>CL1</td> <td>fix</td> <td>8</td> <td>0 0 1</td> <td>7</td> </tr> <tr> <td></td> <td>bug</td> <td>1</td> <td>0 0 0</td> <td>1</td> </tr> <tr> <td></td> <td>none</td> <td>15820</td> <td>215 88 40</td> <td>67</td> </tr> <tr> <td>CL2</td> <td>fix</td> <td>1</td> <td>0 0 0</td> <td>1</td> </tr> <tr> <td></td> <td>bug</td> <td>0</td> <td>0 0 0</td> <td>0</td> </tr> <tr> <td></td> <td>none</td> <td>4966</td> <td>34 12 1</td> <td>10</td> </tr> <tr> <td>CL3</td> <td>fix</td> <td>4</td> <td>0 0 0</td> <td>4</td> </tr> <tr> <td></td> <td>bug</td> <td>1</td> <td>0 0 0</td> <td>1</td> </tr> <tr> <td></td> <td>none</td> <td>15538</td> <td>106 5 40</td> <td>40</td> </tr> <tr> <td>CL4</td> <td>fix</td> <td>1</td> <td>0 0 0</td> <td>1</td> </tr> <tr> <td></td> <td>bug</td> <td>0</td> <td>0 0 0</td> <td>0</td> </tr> <tr> <td></td> <td>none</td> <td>1548</td> <td>8 4 0</td> <td>16</td> </tr> <tr> <td>CL5</td> <td>fix</td> <td>1</td> <td>0 0 0</td> <td>1</td> </tr> <tr> <td></td> <td>bug</td> <td>1</td> <td>0 0 0</td> <td>1</td> </tr> <tr> <td></td> <td>none</td> <td>1547</td> <td>8 4 0</td> <td>15</td> </tr> </tbody> </table> Figure 7. Filtered security fixes (fix), buggy changes (bug), and non-semantic changes (none) using the four filters. The right-most column shows the number of changes of each type that remain after applying all filters. AES to the more secure CBC and GCM modes. We also give the concrete commits (with links in the references) that are identified with these usage changes. The top-most two usage changes are joined together to form a cluster, and then this cluster is joined with the third usage change.
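The pairwise joining just described is standard agglomerative clustering. A minimal single-linkage sketch over a distance matrix (our simplification; the concrete distance function and linkage used in Section 4.3 may differ):

```java
import java.util.*;

// A minimal agglomerative (single-linkage) clustering sketch over a distance
// matrix. This illustrates how dendrograms like the one in Figure 8 are
// built by repeatedly merging the two closest clusters; the actual variant
// used by DiffCode (Section 4.3) may differ.
class Agglomerative {
    // Returns the sequence of merges, each merge given as the pair of
    // clusters (sets of item indices) that were joined.
    static List<List<Set<Integer>>> cluster(double[][] dist) {
        List<Set<Integer>> clusters = new ArrayList<>();
        for (int i = 0; i < dist.length; i++) clusters.add(new TreeSet<>(Set.of(i)));
        List<List<Set<Integer>>> merges = new ArrayList<>();
        while (clusters.size() > 1) {
            int bi = 0, bj = 1;
            double best = Double.MAX_VALUE;
            for (int i = 0; i < clusters.size(); i++)
                for (int j = i + 1; j < clusters.size(); j++) {
                    double d = linkage(clusters.get(i), clusters.get(j), dist);
                    if (d < best) { best = d; bi = i; bj = j; }
                }
            Set<Integer> merged = new TreeSet<>(clusters.get(bi));
            merged.addAll(clusters.get(bj));
            merges.add(List.of(clusters.get(bi), clusters.get(bj)));
            clusters.remove(bj);   // remove the higher index first
            clusters.remove(bi);
            clusters.add(merged);
        }
        return merges;
    }

    // Single linkage: distance between the closest members of two clusters.
    static double linkage(Set<Integer> a, Set<Integer> b, double[][] dist) {
        double min = Double.MAX_VALUE;
        for (int x : a) for (int y : b) min = Math.min(min, dist[x][y]);
        return min;
    }
}
```

With distances mirroring Figure 8 (two near-identical changes plus a more distant one), the first merge pairs the two close changes and the final merge attaches the third.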
We found such information provided by the clustering helpful to navigate through and inspect the security fixes. Rules. We consider rules of the form \( t: \varphi \) where \( t \in \text{Types} \) is a type and \( \varphi \) is a logical formula interpreted over a set \( S \subseteq \mathcal{P}(\text{Methods} \times \text{AStates}) \) of method and abstract state pairs. For example, the logical formula \[ \varphi \equiv \exists (m, \sigma^a) \in S. m = \text{getInstance}(X) \land \Delta^a(X) = \text{SHA-1} \] is satisfied, denoted \( S \models \varphi \), if the set \( S \) contains a pair \((m, \sigma^a)\) such that \( m = \text{getInstance}(X) \) and \( \Delta^a(X) = \text{SHA-1} \). A rule \( t: \varphi \) matches an abstract object \( a \) of a given program with abstract uses \( \text{AUses} \) if \( \text{type}(a) = t \) and \( \text{AUses}(a) \models \varphi \). We say that a rule \( t: \varphi \) is applicable to an abstract object \( a \) if \( \text{type}(a) = t \). For brevity, we write \[ \text{getInstance}(X) \land X = \text{SHA-1} \] as a shorthand for the logical formula given above. We sometimes conjoin multiple rules to express more complex ones. For example, the composite rule \((t_1: \varphi_1) \land (t_2: \varphi_2)\) matches a program \( P \) if both \( t_1: \varphi_1 \) and \( t_2: \varphi_2 \) match some, possibly different, abstract objects in \( P \). Elicited Rules. Using DiffCode on our data set of Java projects, we managed to discover a number of different security rules, which we also validated by checking security papers, blogs, and bulletins. In Figure 9 we list all security rules. Out of these, \( R2, R7, R9, R10, R11, R12 \) are known and have been documented. For details on these we refer the reader to [12]. We next describe some of the other rules. Rule \( R1 \) states that SHA-256 should be used instead of SHA-1. 
Indeed, security researchers have recently announced the first practical technique for generating a collision for SHA-1 [30], and they have warned developers to switch to the more secure SHA-256. Rule \( R3 \) states that the preferred mode for using the SecureRandom class is SHA-1PRNG, which is initially seeded via a combination of system attributes and the java.security entropy gathering device. We later found the motivation described in [2]. Rule \( R4 \) states that SecureRandom.getInstanceStrong() should be avoided in server-side code running on Solaris/Linux/MacOS where availability is important, as documented in [28]. This is because SecureRandom.getInstanceStrong() returns the NativePRNGBlocking mode of SecureRandom, which may block, and thus developers suggest avoiding it [28]. Rule \( R5 \) states that the BouncyCastle provider should be used instead of the default Java Crypto API provider because BouncyCastle does not have the 128 bit secret key restriction [3]. Rule \( R6 \) detects that SecureRandom is vulnerable on Android SDK versions 16, 17, and 18 if the Linux PRNG module is not installed [1]. The implementation of the check \( \text{HAS_LPRNG} \) is described in [1]. Rule \( R8 \) states that Cipher should not be used in DES mode because this mode is no longer considered secure [23]. Rule \( R13 \) states that developers should add an integrity check after having exchanged a symmetric key, which is frequently done with an asymmetric cipher such as RSA. A common fix is to switch to the AES cipher in combination with HMAC [6]. Note that, to match vulnerable projects, rule \( R13 \) is expressed as a composite rule that refers to three distinct objects – two objects of type Cipher and one object of type Mac. We remark that the rule matches any project that has the two Cipher objects but lacks the required Mac object.
In particular, the rule does not explicitly define in which order these objects are declared and used. Overall, while some of these rules may be known to some security researchers, with the DiffCode system we were able to systematically derive all of them. Further, DiffCode enabled us to create a single checker for all of these rules.

On Automating Rule Elicitation. We remark that DiffCode can also automatically suggest a rule by constructing a predicate that matches any use that has the features present in the old versions and does not have those added to the new versions. Note that this predicate would match any usage that is not fixed according to the code changes. As a simple example, consider the removed and added features depicted in Figure 2(d). The generated security rule would be \[ \text{Cipher}: (\text{getInstance}(X) \wedge X = \text{AES}) \\ \wedge (\text{getInstance}(Y) \Rightarrow Y \neq \text{AES/CBC/PKCS5Padding}) \\ \wedge (\text{<init>}(X', \ldots, Y') \Rightarrow Y' \neq \text{IvParameterSpec}) \] This rule captures that AES Ciphers that use the default AES mode, and neither use the AES/CBC/PKCS5Padding mode nor pass an object of type IvParameterSpec to the constructor, must be fixed. While the above method can completely automate the generation of rules, identifying whether a rule is security-relevant in a purely automated manner is challenging and goes beyond the scope of this work.

6.4 Relevance of the Elicited Security Rules

To evaluate the relevance of the discovered security rules, we developed a security checker, called CryptoChecker, that supports all rules in Figure 9. We ran CryptoChecker on 519 Java projects. These include all 463 Java projects we used for training as well as an additional 56 projects which we downloaded after eliciting the rules. We report the number of discovered rule violations in Figure 10.
For each security rule, we give (i) the total number of projects that have at least one usage applicable to the security rule, and (ii) the number of projects that have at least one insecure usage according to our rules. For instance, rule R1 is only applicable to usages of the API class MessageDigest, as it stipulates how objects of type MessageDigest should be instantiated. The number 257 in the first row thus indicates that there are 257 projects (49.5% of the 519 projects) that have at least one usage of type MessageDigest. The matching column indicates that 89 out of the 257 projects (34.6%) have at least one usage that matches rule R1. <table> <thead> <tr> <th>ID</th> <th>Description</th> <th>Rule</th> </tr> </thead> <tbody> <tr> <td>R1</td> <td>Use SHA-256 instead of SHA-1 [30]</td> <td>MessageDigest: getInstance(X) \wedge X = SHA-1</td> </tr> <tr> <td>R2</td> <td>Do not use password-based encryption with iteration count less than 1000 [7]</td> <td>PBEKeySpec: &lt;init&gt;(_, _, X) \wedge X &lt; 1000</td> </tr> <tr> <td>R3</td> <td>SecureRandom should be used with SHA-1PRNG [2]</td> <td>SecureRandom: &lt;init&gt;(X) \wedge X ≠ SHA-1PRNG</td> </tr> <tr> <td>R4</td> <td>SecureRandom with getInstanceStrong should be avoided</td> <td>SecureRandom: getInstanceStrong</td> </tr> <tr> <td>R5</td> <td>Use the BouncyCastle provider for Cipher</td> <td>Cipher: getInstance(X) \wedge X ≠ BC</td> </tr> <tr> <td>R6</td> <td>The underlying PRNG is vulnerable on Android v16-18 [17]</td> <td>SecureRandom: &lt;init&gt;(X) \wedge ¬HAS_LPRNG \wedge MIN_SDK_VERSION ≥ 16</td> </tr> <tr> <td>R7</td> <td>Do not use Cipher in AES/ECB mode [9]</td> <td>Cipher: getInstance(X) \wedge (X = AES \vee X = AES/ECB)</td> </tr> <tr> <td>R8</td> <td>Do not use Cipher with DES mode [23]</td> <td>Cipher: getInstance(X) \wedge X = DES</td> </tr> <tr> <td>R9</td> <td>IvParameterSpec should not be initialized with a static byte array [9]</td> <td>IvParameterSpec: &lt;init&gt;(X) \wedge X ≠ byte[]</td> </tr> <tr> <td>R10</td> <td>SecretKeySpec should not be static</td> <td>SecretKeySpec: &lt;init&gt;(X) \wedge X ≠ byte[]</td> </tr> <tr> <td>R11</td> <td>Do not use password-based encryption with static salt</td> <td>PBEKeySpec: &lt;init&gt;(_, _, X) \wedge X ≠ byte[]</td> </tr> <tr> <td>R12</td> <td>Do not use SecureRandom static seed</td> <td>SecureRandom: setSeed(X) \wedge X ≠ byte[]</td> </tr> <tr> <td>R13</td> <td>Missing integrity check after symmetric key exchange [6]</td> <td>(Cipher: getInstance(X) \wedge startsWith(X, AES/CBC)) \wedge (Cipher: getInstance(Y) \wedge Y = RSA) \wedge ¬(Mac: getInstance(Z) \wedge startsWith(Z, Hmac))</td> </tr> </tbody> </table> Figure 9. Security rules derived from security fixes applied to the Java Crypto API. <table> <thead> <tr> <th>Rule</th> <th>Applicable (% of total)</th> <th>Matching (% of appl.)</th> </tr> </thead> <tbody> <tr> <td>R1</td> <td>257 (49.5%)</td> <td>89 (34.6%)</td> </tr> <tr> <td>R2</td> <td>64 (12.3%)</td> <td>15 (23.4%)</td> </tr> <tr> <td>R3</td> <td>305 (58.8%)</td> <td>289 (94.8%)</td> </tr> <tr> <td>R4</td> <td>305 (58.8%)</td> <td>3 (1%)</td> </tr> <tr> <td>R5</td> <td>211 (40.7%)</td> <td>206 (97.6%)</td> </tr> <tr> <td>R6</td> <td>59 (11.4%)</td> <td>48 (81.4%)</td> </tr> <tr> <td>R7</td> <td>211 (40.7%)</td> <td>60 (28.4%)</td> </tr> <tr> <td>R8</td> <td>211 (40.7%)</td> <td>20 (9.5%)</td> </tr> <tr> <td>R9</td> <td>124 (23.9%)</td> <td>7 (5.6%)</td> </tr> <tr> <td>R10</td> <td>232 (44.7%)</td> <td>12 (5.2%)</td> </tr> <tr> <td>R11</td> <td>64 (12.3%)</td> <td>7 (11%)</td> </tr> <tr> <td>R12</td> <td>305 (58.8%)</td> <td>1 (0.3%)</td> </tr> <tr> <td>R13</td> <td>8 (1.5%)</td> <td>4 (50%)</td> </tr> </tbody> </table> Figure 10. Rule violations for the analyzed projects. Overall, the data in Figure 10 confirms recent findings that developers struggle to use the Java Crypto API correctly [24]. CryptoChecker finds at least one matched security rule in more than 57% of the projects.
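To illustrate how rules of this shape are evaluated, consider a simplified representation in which an abstract object's uses are (method, abstract argument) pairs; a rule like R1 or R8 then reduces to an existential check over that set. The sketch below is illustrative only: Use, matchesR1, and matchesR8 are our names, not the actual API of DiffCode or CryptoChecker.

```java
import java.util.*;

// A minimal sketch of rule matching over abstracted uses. Each use is a
// (method, abstract argument value) pair; this simplifies the paper's
// (method, abstract state) pairs to keep the example self-contained.
record Use(String method, String arg) {}

class RuleChecker {
    // R1: MessageDigest: getInstance(X) /\ X = SHA-1
    static boolean matchesR1(Set<Use> uses) {
        return uses.stream().anyMatch(u -> u.method().equals("getInstance")
                                        && u.arg().equals("SHA-1"));
    }

    // R8: Cipher: getInstance(X) /\ X = DES
    static boolean matchesR8(Set<Use> uses) {
        return uses.stream().anyMatch(u -> u.method().equals("getInstance")
                                        && u.arg().equals("DES"));
    }
}
```

A composite rule such as R13 would conjoin several such checks over distinct abstract objects, including a negated one for the missing Mac object.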
We remark that CryptoLint [12], a similar checker to CryptoChecker, can be used to check some (but not all) of CryptoChecker's rules. However, since CryptoLint is not publicly available, we were unable to compare our results. Finally, we used some of the reports of CryptoChecker to report 15 security violations, 3 of which were confirmed. The reported issues are listed at http://diffcode.ethz.ch.

7 Related Work

In this section, we survey several recent works that are most closely related to ours.

Misuse of Crypto APIs. The authors of [22] describe a set of security guidelines related to the use of crypto APIs, e.g. that password-based encryption must be used with a random seed, encryption keys must not be hard-coded, and so forth. Applications that do not follow these guidelines are considered vulnerable. They examine 49 Android applications and show that 87.8% of them suffer from at least one vulnerability. The authors of [11] present a manual analysis of different crypto APIs (OpenSSL, Java, PyCrypto, and others) and discuss seven problems related to API misuses, such as reuse of initialization vectors, lack of code samples in the API documentation, safe API defaults, and so forth. The scope of this report is on crypto APIs, not on the applications that use these APIs. In [24], the authors report on a survey to discover why developers often fail to use crypto APIs correctly. They also present suggestions to API developers that may mitigate the problem of API misuse.

Detecting Misuse of Crypto APIs. Several works consider the problem of automatically detecting misuses of crypto APIs in Java applications. OWASP provides a list of static analysis tools [5] that target security issues; however, these tools have a very limited set of crypto checks (e.g. see FindSecBugs [4]). CryptoLint [12] is a specialized system that checks Android applications for crypto API misuses with six fixed rules, such as “Do not use ECB mode for encryption”.
To check these properties, CryptoLint statically computes a program slice immediately before the invocation of a crypto API and checks properties on the arguments passed to that API. The authors evaluate more than 11k Android applications and show that 88% of the applications violate at least one rule. Compared to CryptoLint, CryptoChecker supports a more comprehensive set of security rules. The CMA analyzer, presented in [29], is very similar to [12] and also targets finding misuses of crypto APIs in Android. Compared to [12], CMA considers more security rules. AmanDroid [31] is a system for precise static analysis of Android applications. AmanDroid computes a dependency graph that captures control- and data-flow dependencies for all objects. The security analysis is then phrased as graph queries over the dependency graph. For example, we can check an application for the absence of data leaks by checking that there is no path from a source to a sink. AmanDroid can also be used to check whether applications misuse crypto APIs by encoding the rules of [11, 13, 24] in terms of graph queries. In [16], the authors explain the concepts behind AmanDroid in a more general, cleaner setting. Misuse of crypto APIs is not an Android- or Java-specific problem: the evaluation of a dynamic analysis tool for iOS [20] found that over 65% of the tested iOS applications suffer from vulnerabilities due to API misuse.

Repairing Misuse of Crypto APIs. The CDRep system presented in [22] can be used to detect and repair misuses of Android's crypto API. For the detection step, CDRep performs an analysis similar to the one presented in [12]. Given an Android APK, CDRep detects the instructions responsible for a particular misuse of the crypto API. The responsible instructions include the call to a particular crypto API, called the indicator instruction, and instructions on which the indicator instruction depends. For example, one security rule states that applications must not use encryption in ECB mode.
The responsible instructions that violate this rule would include: the call to the encrypt method, the instruction that constructs the encryption object, and the instruction that initializes the encryption-scheme string (e.g. “AES/ECB”) passed to the encryption constructor. After identifying the instructions responsible for a given cryptographic misuse, CDRep uses manually pre-defined patch templates to suggest candidate repairs.

Learning from Code. Several prior works check for API errors by first learning a likely specification of a program and its API calls [13, 18]. The recent APISan system [32] automatically infers the correct usage of APIs by observing the contexts of the calls from multiple projects. For example, APISan can learn that the return value of a method is typically checked for null, and then it can report outliers to this learned specification. This allows APISan to check large codebases in a precise and scalable way for given predefined types of issues such as null dereferences and overflows. In contrast to these works, we focus on crypto APIs for which (i) we do not know the kind of issues that may be present and (ii) the majority of the projects misuse the APIs. The work of Long and Rinard [21] considers the reverse of our task and learns from correct code to guide automatic generation of bug fixes.

Learning from Code Changes. Several works propose to learn from previous code changes to help developers complete a new change. A system by Zimmermann et al. [33] warns developers if a newly developed change only does a subset of what other changes did. A more recent work by Nguyen et al. [25] developed a code completion engine that precisely predicts code for new changes based on code in previous code changes. In contrast to our approach, however, these works cannot find issues in existing code and make predictions only for code modifications.
8 Conclusion

We presented a new data-driven approach for extracting semantically meaningful API usage changes from concrete code fixes collected by processing public repositories. The approach is based on an abstraction for code changes which captures the implications of a change to objects of the Crypto API. These implications are represented as semantic features that are removed from the old and added to the new version of the program. Our abstraction enables us to distill relevant semantic changes using filters that eliminate purely syntactic modifications. As a final step we (hierarchically) cluster the remaining, semantically meaningful security fixes, enabling us to derive new security rules. We also presented DiffCode, a system that implements our data-driven approach. We applied DiffCode to Java code changes collected from GitHub and extracted security fixes for the Java Crypto API. Based on these results, we identified 13 relevant security rules which we implemented in a new security checker called CryptoChecker. We evaluated CryptoChecker on a number of public Java projects, discovering misuses of the Java Crypto API in more than 57% of the analyzed projects. The data-driven approach presented in this work allowed us to systematically derive relevant security rules, some of which are missing from existing checkers. We believe that this work is an important step towards solving the general problem of automatically deriving API misuse checks.

References

[23] Dhruv Mohindra. 2016. Do not use insecure or weak cryptographic algorithms. https://www.securecoding.cert.org/confluence/display/java/MSC61-J-Do+not+use+insecure+or+weak+cryptographic+algorithms
ABSTRACT Modern software systems are increasingly built out of services that are developed, deployed, and operated by independent organizations, which expose them for use by potential clients. Services may be directly invoked by clients. They may also be composed by service integrators, who in turn expose the composite artifact as a new service. Continuous change is typical of this world. Providers may change services and the deployment infrastructure to meet continuously changing requirements and be more competitive. Clients may change their operational profiles. Changes have a severe impact on the quality of services. In this paper we address the problem of identifying changes concerning the non-functional behavior of software services managed by external organizations, and consequently considered as black-box artifacts. We define the concept of change-point and provide a statistical technique aimed at identifying it, given an execution trace produced by client invocations. Change-point detection is key to reasoning about changes, diagnosing their cause, and suitably reacting to their occurrence. Categories and Subject Descriptors C.4 [Computer Systems Organization]: Performance of Systems—Modeling techniques, Performance attributes; D.2.4 [Software Engineering]: Software/Program Verification—Reliability General Terms Performance, Reliability 1. INTRODUCTION 1.1 The Context Software design and development radically changed in the last decade. Software systems were traditionally designed to operate in a completely known and immutable environment. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. 
FSE-18, November 7–11, 2010, Santa Fe, New Mexico, USA. Copyright 2010 ACM 978-1-60558-791-2/10/11 ...$10.00. and control, because of the plethora of different and intertwined phenomena that may occur in the real world. The approach we describe here aims at providing support to reasoning about changes and diagnosing their causes. 1.2 The Problem and Its Motivations Our contribution focuses on the problem of change-point detection for black-box services. We observe services as black boxes, that is, without accessing their internals, but instead monitoring and analyzing the data that flow through their interface. Service internals are in fact normally inaccessible to clients, and even service providers—who do have access to them—may find it useful to be able to analyze services without accessing the complex internal details of their implementation. Observations are data streams, collected by probes at the service interface, which describe the behavior of the service concerning a specific quality of interest and from a specific viewpoint. If performance is the quality of interest, observations may be sequences of timestamped invocations with the associated response time. If instead reliability is the quality of interest, they may be sequences of timestamped invocations with an associated result indicating whether the invocation was served successfully or not. Furthermore, observations may express the viewpoint of (and be collected on behalf of) the client, who monitors the calls issued to external services (e.g., see [2] for a monitoring approach). This is called client-side analysis. Alternatively, in server-side analysis, data are collected by service providers, who monitor the QoS of the exported service, which may be used concurrently by several clients. More precisely, assume we are interested in reliability, and suppose it is measured as a service’s failure rate. 
Given a set of observations in a time interval, our change-point detection method can detect (1) if a relevant change took place, (2) the point in time when the change occurred, and (3) the new value of the failure rate after the change occurred. For example, consider Figure 1. Suppose that a change-point is detected at the 12th observation of the stream. We can deduce that a relevant change in the system occurred at a time \( t \) which is greater than the timestamp associated with the 11th observation and less than the timestamp associated with the 12th observation.

Figure 1: Change-point example.

Change-point detection may be a useful tool for the client. If the detected change implies a SLA violation, the exact knowledge of when the change occurred may be used as a proof of evidence in a dispute with the provider. Detected change(s) may also suggest re-binding to a different provider, who may provide a better service [17]. Change-point detection may also be a useful tool for the provider. The provider, in fact, may trace back from the point in time when an unexpected change in quality occurred to the actions that may have caused that change.
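Both the client-side and provider-side uses presuppose the ability to locate the change-point in the observation stream. As a concrete illustration (not the statistical framework this paper develops in Section 3), a minimal maximum-likelihood detector for a stream of success/failure observations can be sketched as follows:

```java
// A minimal maximum-likelihood change-point sketch for a Bernoulli stream of
// observations (true = failed invocation). This is illustrative only: it
// scans every split point, compares the two-rate model against the
// single-rate one, and reports a change when the log-likelihood gain
// exceeds a fixed threshold.
class ChangePoint {
    // Returns {splitIndex, newFailureRate}, or null if no change is detected.
    static double[] detect(boolean[] obs, double threshold) {
        int n = obs.length;
        double base = logLik(obs, 0, n);
        int bestK = -1;
        double bestGain = 0;
        for (int k = 1; k < n; k++) {
            double gain = logLik(obs, 0, k) + logLik(obs, k, n) - base;
            if (gain > bestGain) { bestGain = gain; bestK = k; }
        }
        if (bestK < 0 || bestGain < threshold) return null;
        return new double[] { bestK, mean(obs, bestK, n) };
    }

    static double mean(boolean[] obs, int from, int to) {
        int fails = 0;
        for (int i = from; i < to; i++) if (obs[i]) fails++;
        return (double) fails / (to - from);
    }

    // Bernoulli log-likelihood of obs[from..to) under its empirical rate.
    static double logLik(boolean[] obs, int from, int to) {
        int m = to - from, f = 0;
        for (int i = from; i < to; i++) if (obs[i]) f++;
        if (f == 0 || f == m) return 0;   // pure segment: likelihood 1
        double p = (double) f / m;
        return f * Math.log(p) + (m - f) * Math.log(1 - p);
    }
}
```

On a stream of 11 successful invocations followed by failures, the detector reports the split before the 12th observation together with the new failure rate, matching the Figure 1 example.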
By reasoning about the change-point, one might point to the installation of a new library component, which substituted a previously installed component and may be responsible for the unexpected change. An important special case is when the client is a service integrator, who offers an added value service to its clients by aggregating existing third-party services. In this case, the detected change-points and (following the previous example) the updated value of reliability of third-party services may be used to compute an updated value of reliability for the composite service, to assess whether this jeopardizes its ability to satisfy the contracts with clients. This kind of analysis may be performed as discussed in [9]. This paper focuses on change-point detection, i.e., on identifying when a potential problem occurred. Understanding its originating cause and planning for reactions which might behave as remedies are outside its scope, and will be the goal of further investigations. Our main contributions are the definition of the concept of change-point, the illustration of its relevance in service-oriented systems, and the development of a statistical framework supporting change-point detection. We also provide an initial assessment of its effectiveness and limitations by numerical simulations. The remainder of the paper is organized as follows: Section 2 illustrates the case study in which we apply our approach. Section 3 provides a detailed description of the proposed approach. Section 4 illustrates the simulations which validate the proposed approach. Finally, Section 5 discusses related work and Section 6 concludes the paper describing the current limitations of our approach and future work. 2. A RUNNING EXAMPLE Let us consider a service for image editing, called ImageModder, which provides a set of APIs to manipulate, transform, and save images in different formats. ImageModder represents a real-world case study. 
It supports a wide range of applications, such as: (1) healthcare applications—which usually require sophisticated image enhancement techniques for diagnosis purposes—or (2) web applications that offer on-line photo editing. The actual implementation of ImageModder resides on a server and interaction with clients is performed through a publicly available Web service or through a downloadable software component in charge of masking the remote interaction with the server. The API provided by ImageModder comprises the primitives listed in Table 1. A complete documentation of the API is beyond the scope of this paper. Due to the lack of space we focus on the description of the essential concepts needed to understand the basic features of ImageModder. Clients must first authenticate. If authentication succeeds, clients can upload and modify images in a virtual file system organized in folders. Initially, clients must: (1) set their working folder (potentially creating a new one) and (2) set their current image (potentially importing a new one). Afterwards, ImageModder is ready to work. Each operation invoked by clients directly affects the working folder or the current image. Image transformations are performed through the application of filters. A filter is a transformation that modifies the current image (e.g., histogram equalization, gamma correction, etc.). Finally, users can save or export their modified image. ImageModder offers to clients up-to-date image processing techniques; the library of existing filters is constantly maintained and updated. Application developers who need image editing features may exploit ImageModder instead of developing in-house equivalent techniques.

---

¹ImageModder was inspired by the API provided by an existing on-line application: Picnik [24], currently adopted by several clients (e.g., Flickr [10]) to support photo editing of user generated content.
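The interaction protocol just described (authenticate, select a working folder and current image, then apply filters) can be made concrete with a small client-side stub. Everything below is hypothetical: the method names follow Table 1, but the in-memory implementation is ours and only enforces the ordering constraints of the protocol.

```java
// An illustrative stub of the ImageModder workflow. The real service would
// perform remote invocations; this stub only checks that clients respect the
// protocol: login, then set a working folder and a current image, then edit.
class ImageModderStub {
    private boolean loggedIn;
    private String currentFolder, currentImage;

    boolean login(String user, String password) { loggedIn = true; return true; }

    void setCurrentFolder(String path) {
        require(loggedIn, "login first");
        currentFolder = path;
    }

    void setCurrentImage(String path) {
        require(currentFolder != null, "set a working folder first");
        currentImage = path;
    }

    // Applies a named filter (e.g. histogram equalization) to the current image.
    void filter(String name) {
        require(currentImage != null, "set a current image first");
        // ... the remote invocation would happen here ...
    }

    private static void require(boolean cond, String msg) {
        if (!cond) throw new IllegalStateException(msg);
    }
}
```

Calling filter before the session is set up fails, while the documented sequence login, setCurrentFolder, setCurrentImage, filter succeeds.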
Relying on a third-party service, however, may cause the problems we described in Section 1. The behavior of a service may change over time, possibly invalidating the assumptions made when it was chosen. We show how change-point detection can be used by a client to identify changes in the non-functional qualities of ImageModder and estimate the new values. The updated estimates may be used by ImageModder clients to decide if the new behavior is compliant with their requirements or if it is necessary to re-bind their system to another service provider or to develop these features in-house.

Table 1: ImageModder API.

<table> <thead> <tr> <th>Type</th> <th>Name</th> <th>Description</th> <th>Parameters</th> </tr> </thead> <tbody> <tr> <td>General</td> <td>login</td> <td>Authenticates the user and opens a session</td> <td>Username and Password</td> </tr> <tr> <td></td> <td>logout</td> <td>Closes the current session</td> <td>N/A</td> </tr> <tr> <td>Folder</td> <td>createFolder</td> <td>Creates a new folder in the current one</td> <td>The name of the folder</td> </tr> <tr> <td>Folder</td> <td>setCurrentFolder</td> <td>Sets the current folder</td> <td>The absolute path of the folder</td> </tr> <tr> <td>Folder</td> <td>renameCurrentFolder</td> <td>Renames the current folder</td> <td>The new name of the folder</td> </tr> <tr> <td>Image</td> <td>import</td> <td>Loads a file in the current folder</td> <td>The file to be uploaded</td> </tr> <tr> <td>Image</td> <td>export</td> <td>Returns the current image</td> <td>N/A</td> </tr> <tr> <td>Image</td> <td>setCurrentImage</td> <td>Selects the current image</td> <td>The absolute path of the image</td> </tr> <tr> <td>Image</td> <td>renameCurrentImage</td> <td>Renames the current image</td> <td>The new name of the image</td> </tr> <tr> <td>Image</td> <td>filter</td> <td>Applies the filter to the current image</td> <td>The filter name</td> </tr> <tr> <td>Image</td> <td>delete</td> <td>Deletes the current image</td>
<td>N/A</td> </tr> <tr> <td>Image</td> <td>save</td> <td>Saves the current image</td> <td>N/A</td> </tr> </tbody> </table>

### 3. CHANGE-POINT DETECTION

As we said in Section 1, our approach focuses on identifying changes in the non-functional behavior of black-box services, given run-time data extracted from running instances of systems that exploit such services. We consider (1) reliability, expressed as probability of failure, and (2) performance, expressed as response-time distribution. The proposed approach exploits models to represent the non-functional behavior of the component under analysis; in particular, we adopt Discrete Time Markov Chains (DTMCs). Section 3.1 introduces DTMCs and Section 3.2 describes how these models are used to detect change-points of software services.

#### 3.1 Discrete Time Markov Chains

DTMCs are stochastic processes with the Markov property. They are defined as state-transition systems augmented with probabilities. States represent possible configurations of the system. Transitions among states occur at discrete time instants and have an associated probability. Formally, a sequence of random variables \( X_0, X_1, \ldots \) is a DTMC with tuple \((S, s_{init}, M)\) if

\[ P(X_0 = s_{init}) = 1 \]

and the following Markov property holds:

\[ P(X_{n+1} = s' \mid X_n = s, X_1, \ldots, X_{n-1}) = P(X_{n+1} = s' \mid X_n = s) = m_{s,s'}, \tag{1} \]

where

- \( S \) is a finite set of states: \( S = \{1, \ldots, k\} \);
- \( s_{init} \in S \) is the initial state;
- \( M : S \times S \rightarrow [0, 1] \) is a transition probability matrix; its element \( m_{s,s'} \) represents the probability of passing from state \( s \) to state \( s' \), and \( \sum_{s' \in S} m_{s,s'} = 1 \).

The Markov property (1) means that the probability of going from state \( s \) to \( s' \) is independent of the past transitions. In our approach, DTMCs are used to model both the probability of failure and discrete distributions of response time. They are specified by the set of states \( S \) and the transition probability matrix \( M \), whose entries represent, for example, the probability of failure associated with an operation provided by a service. DTMCs can be represented equivalently by their diagrammatic notation or by their transition probability matrices. We will use the former notation for readability purposes and the latter for the formal description of the statistical technique for change-point detection in Section 3.3.

#### 3.2 Modeling Service Behavior

Consider a service for which we wish to perform client-side change-point analysis. Let us first focus on reliability. As a first step, we need to build a DTMC model of the service. The structure of such a model is generated by analyzing: (1) the service's specification (API, interaction protocol) and the associated SLA, and (2) the expected usage profile of service invocations (i.e., how the client expects or predicts to use the service). Typically, the DTMC comprises one state for every operation provided by the service plus a set of auxiliary states representing potential failure states. Transitions describe the possible sequences of service invocations (and possible failures). They may be annotated with probabilities representing: (1) probabilities of success or failure, derived from the SLA, or (2) the usage profile. In the case of performance, for each operation exported by the API, we build an additional DTMC representing the expected discretized distribution of its response time. Transition probabilities, as in the previous model, can be derived from the SLA. Server-side analysis is similar. The main differences in the latter case may be in the level of detail of the models, which may be finer grained, and in the usage profiles, which account for all users accessing the service.
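To make the transition-matrix view concrete, the following sketch (our own illustration, not the tool released with this paper; all names and the toy matrix are ours) encodes a DTMC as a row-stochastic matrix and simulates a run:

```python
import random

def simulate_dtmc(M, s_init, steps, seed=0):
    """Simulate a DTMC given a transition matrix M whose rows sum to 1."""
    for row in M:
        assert abs(sum(row) - 1.0) < 1e-9, "each row must be a distribution"
    rng = random.Random(seed)
    trace, s = [s_init], s_init
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for s_next, p in enumerate(M[s]):
            acc += p
            if r < acc:
                s = s_next
                break
        trace.append(s)
    return trace

# Toy two-state chain: from either state, move to state 1 with probability 0.2.
M = [[0.8, 0.2],
     [0.8, 0.2]]
trace = simulate_dtmc(M, s_init=0, steps=10)
```

The same matrix representation is what the statistical procedure of Section 3.3 operates on.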
The reliability model derived for ImageModder by analyzing its documentation\(^2\), its API, and by predicting a possible usage profile is shown in Figure 2(a). This model contains: (1) a state for each operation provided by the ImageModder component, (2) several auxiliary states, such as the ready state, which represents the state of the service after the initialization in which the user sets the current image and working directory, and (3) several additional states, highlighted in grey, which represent potential failures associated with specific operations. In this example, we consider only failures associated with operations having an explicit access to repositories (e.g., import, save, etc.). We also consider a failure associated with the login operation to model potential denial of service of the ImageModder component (e.g., in case of too many concurrent requests). The DTMC in Figure 2(a) takes into account the usage profile predicted by the client; for instance, it distinguishes the probability of creating a new folder (i.e., the value labeling the transition from state 0 to state 3) from that of selecting a pre-existing folder after the initial login (i.e., the value labeling the transition from state 0 to state 2). Other probabilities are instead extracted from the SLA subscribed by the ImageModder provider, such as the probability of failure associated with the login state. This probability of failure represents the availability of the service, information typically declared by on-line service providers. Performance models can be derived for ImageModder in a similar manner. Figure 2(b) shows the DTMC modeling the response time of the save operation (normalized by the size of the saved image). The number of discretizing intervals is equal to the number of states reachable from the initial state. The probability that the response time falls in a given time interval is associated with the transition from the initial state to the state representing that interval.
Notice that the number of intervals (and hence of states) affects the accuracy of the model. However, the approach does not directly depend on the size of the model, but only on the number of transitions under scrutiny. Our change-point detection method, discussed in detail in Section 3.3, uses execution traces collected by a monitor to figure out whether the actual behavior of a service complies with a given model and, if not, when the change occurred and what new values characterize the model after the change. For example, should the actual performance model of the save operation change during the utilization of the service with respect to the model in Figure 2(b), the change-point detection method would provide: (1) a refined and accurate version of the model representing the service before the change, (2) a different model representing the service after the change, and (3) an estimate of the change-point \(\tau\) in the trace. The initial models fed to the change-point detection method may reflect our limited and uncertain a priori knowledge of the system. Both usage profiles and probabilities of failure are in fact hard to predict accurately, and SLAs may be inaccurate. As we will see, however, our method is robust: it works correctly even when the initial model is inaccurate, since it can derive accurate estimates from actual observations.

#### 3.3 A Bayesian Technique for Change-Point Detection

This section illustrates the mathematical background of our method, which exploits a Bayesian statistical technique for change-point detection and involves a Monte Carlo integration method called Gibbs sampling. We assume that the interaction between a client and the service is captured by an execution trace.
An execution trace \(x\) is a sequence of triplets \((r,s,t)\), where \(r\) is the unique identifier of the operation, \(s\) is the time when the invocation is issued, and \(t\) is the time when the invocation is completed (if the invocation succeeds) or the special value FAIL (if the invocation fails). From an execution trace we can derive a reliability trace, used to detect change-points in reliability, and performance traces, used to detect change-points in performance. For simplicity, and for space reasons, the way to derive such traces is only informally described hereafter through examples. A reliability trace is obtained by scanning the execution trace from left to right and mapping it into sequences of paths on the reliability model, each of which represents an interaction with the service. For example, the sequence of paths \((0, 3, 4, 0, 2, 5, 7, 11, 14, 11, 10, 11, 19)\) on the DTMC of Figure 2(a) represents two interactions with the service. The former tries to create a folder and then fails. The latter sets the current folder, imports an image, filters, saves, and logs out. Similarly, a performance trace for an operation \(Op\) is built by projecting the execution trace onto the sequence of non-failing \(Op\) invocations. The difference between the completion time and the invocation time of each \(Op\) call is then mapped onto the corresponding transition on \(Op\)'s performance model. For instance, in the case of the save operation, the sequence of paths \((0, 1, 0, 1, 0, 2, 0, 3, 0, 1)\) represents the performance trace projected from an execution trace that contains subsequent calls of the save operation of durations \((0.1, 0.05, 0.3, 0.7, 0.07)\). Summing up, each trace is a stream of sequences, representing paths in the corresponding DTMC, in which we search for change-points. From a mathematical viewpoint, the change-point detection method works as follows.
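The projection step for the save operation can be sketched as follows; the interval bounds follow Table 2, while the boundary convention (strict vs. non-strict inequalities) and the function name are our assumptions:

```python
from bisect import bisect_right

# Interval bounds of the discretized response-time model (cf. Table 2):
# state 1: RT < 0.2, state 2: 0.2..0.5, state 3: 0.5..0.8, state 4: RT > 0.8.
BOUNDS = [0.2, 0.5, 0.8]

def performance_trace(durations):
    """Map each normalized response time to a path 0 -> interval-state."""
    path = []
    for t in durations:
        path.extend([0, 1 + bisect_right(BOUNDS, t)])
    return path

durations = [0.1, 0.05, 0.3, 0.7, 0.07]   # the example durations from the text
path = performance_trace(durations)        # -> [0, 1, 0, 1, 0, 2, 0, 3, 0, 1]
```

The resulting path matches the example sequence given above for the save operation.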
Given initial distributions for (1) two random matrices \(A\) and \(B\), which model the service before and after the change, respectively, and (2) a random point \(\tau\) in the trace that identifies the change-point, our approach generates updated estimates for \(A\), \(B\), and \(\tau\) exploiting the information provided by the trace \(x\). The change-point detection method requires the user to provide prior knowledge on \(A\), \(B\), and \(\tau\). Matrix \(A\) corresponds to the DTMC that models the initially expected behavior of the service (the matrix of Figure 2(a) if reliability is our focus)\(^3\). For \(\tau\), we may assume an arbitrary point of the trace. Finally, for \(B\) one can choose arbitrary values or, for example, a "pessimistic" version of \(A\) (e.g., where all failure probabilities are overestimated). As already mentioned, our method works correctly no matter which initial values we choose for the statistical procedure of change-point analysis. Such values are just initial seeds that do not affect its correctness, but only the size of the trace that must be analyzed to make the correct prediction. We will give evidence of this property later in Section 4. The method assumes the structure of the model to be immutable and focuses on its parameters, which may change. Thus the cardinalities of \(A\) and \(B\) are known, equal, and correspond to the structure of the model provided initially by the system designer. In the case of ImageModder, both \(A\) and \(B\) are 22x22 matrices for the reliability model and 5x5 matrices for the performance model of the save operation (see Figure 2).

\(^2\)Due to the lack of space we cannot provide a complete reference documentation of the service. All the crucial information is reported in Section 2.

\(^3\)The procedure applies identically to performance models (and performance traces) and to reliability models (and reliability traces), since both are represented in the same mathematical framework.
Following a Bayesian approach, we consider \(A\), \(B\), and \(\tau\) as random elements characterized by a prior distribution, and we update them by computing the joint posterior distribution of \(A\), \(B\), and \(\tau\) given data \(x\), \(P(A, B, \tau \mid x)\), and the marginal posterior laws \(P(A \mid x)\), \(P(B \mid x)\), and \(P(\tau \mid x)\). In particular, by exploiting the marginal posterior law \(P(\tau \mid x)\), we are able to decide, for a given trace \(x\), whether a change-point has taken place in the modeled system and when. For example, if the trace length is \(n\) and \(P(\tau = n \mid x) > \gamma\), then we are \(100\gamma\%\)-confident that no change-point is present in our trace and the modeled system is still regulated by matrix \(A\). In this setting, \(\tau = n\) (and \(\tau = 1\)) mean no change. By computing the mean value of the posterior distribution \(P(\tau \mid x)\) we obtain an estimate of the change-point. In particular, by estimating the change-point with the posterior mean we minimize the quadratic Bayesian risk [3]. As for the prior law of \(A\), \(B\), and \(\tau\), we assume them to be statistically independent. Moreover, we choose independent Dirichlet distributions for each row of \(A\) and \(B\) and a uniform distribution for \(\tau\). Consequently, considering a system modeled by a DTMC with \(p\) states (i.e., \(A\) and \(B\) are composed of \(p\) rows and \(p\) columns), we have the following prior probability distribution for \(A\):

\[ P(A) = \prod_{i=1}^{p} D(a_i; \alpha_i) \tag{2} \]

where \(a_i = (a_{i,1}, \ldots, a_{i,p})\) is the \(i^{th}\) row of \(A\) and \(D(a_i; \alpha_i)\) is a Dirichlet distribution of parameters \(\alpha_i = (\alpha_{i,1}, \ldots, \alpha_{i,p})\):

\[ D(a_i; \alpha_i) = \frac{\Gamma(\sum_{j=1}^{p} \alpha_{i,j})}{\prod_{j=1}^{p} \Gamma(\alpha_{i,j})} \prod_{j=1}^{p} a_{i,j}^{\alpha_{i,j} - 1} \]

For matrix \(B\) we have a similar formulation:

\[ P(B) = \prod_{i=1}^{p} D(b_i; \beta_i) \]

where \(b_i\) is the \(i^{th}\) row of \(B\) and \(D(b_i; \beta_i)\) is a Dirichlet distribution of parameters \(\beta_i\).
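Sampling a row of \(A\) from its Dirichlet prior reduces to normalizing independent Gamma variates; a minimal sketch (ours, with illustrative parameter values, not the released tool):

```python
import random

def dirichlet_sample(alpha, rng):
    """Draw a probability vector from Dirichlet(alpha) by normalizing
    independent Gamma(alpha_j, 1) variates."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(g)
    return [v / total for v in g]

rng = random.Random(0)
# Flat prior over, say, four successor states of one row of A.
row = dirichlet_sample([1.0, 1.0, 1.0, 1.0], rng)
```

Each draw is a valid DTMC row: non-negative entries summing to one.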
Finally, the prior distribution for \(\tau\) is

\[ P(\tau) = \frac{1}{n}, \quad \tau = 1, 2, \ldots, n \tag{3} \]

By an appropriate choice of the parameters \(\alpha_i, \beta_i\), Dirichlet distributions capture in a simple and well-established way the prior knowledge and beliefs regarding the structure of the transition matrices \(A\) and \(B\); few alternative multivariate distributions are as analytically tractable and have such well-known structural properties. The likelihood of data \(x\) is:

\[ f(x \mid A, B, \tau) = \prod_{i=1}^{p} \prod_{j=1}^{p} a_{i,j}^{N_{i,j}(\tau)} b_{i,j}^{M_{i,j}(\tau)}, \tag{4} \]

where \(N_{i,j}(\tau)\) is the number of transitions in the trace from state \(i\) to state \(j\) up to the change-point \(\tau\) and \(M_{i,j}(\tau)\) is the number of transitions from state \(i\) to \(j\) after \(\tau\). From Bayes' Theorem we know that:

\[ P(A, B, \tau \mid x) \propto f(x \mid A, B, \tau) \times P(A)P(B)P(\tau) \tag{5} \]

The desired probability \(P(\tau \mid x)\) can be obtained by integrating (5) with respect to \(A\) and \(B\), which can be extremely difficult to evaluate analytically or even numerically. Gibbs sampling\(^4\) comes into play to solve this issue [5, 6, 15, 27], since it allows us to extract samples from \(P(\tau \mid x)\) without requiring an explicit computation of it, by exploiting the following marginal conditional distributions:

\[ P(A \mid B, \tau, x) \quad P(B \mid A, \tau, x) \quad P(\tau \mid A, B, x) \]

Let us first compute \( P(A \mid B, \tau, x) \).

\(^4\)A description of Gibbs sampling is beyond the scope of this paper; a brief introduction to this topic can be found in the Appendix.
By applying Bayes' Theorem we obtain:

\[ P(A \mid B, \tau, x) = \frac{P(A, B, \tau \mid x)}{P(B, \tau \mid x)} = \frac{P(A)f(x \mid A, B, \tau)}{\int f(x \mid A, B, \tau)P(dA)} \tag{6} \]

Using the expression of the likelihood given in (4) and the prior distribution for \(A\) given in (2), for the numerator in (6) we obtain:

\[ P(A)f(x \mid A, B, \tau) \propto \prod_{i,j=1}^{p} a_{i,j}^{\alpha_{i,j} + N_{i,j}(\tau) - 1} b_{i,j}^{M_{i,j}(\tau)} \]

so that:

\[ P(A \mid B, \tau, x) \propto \prod_{i,j=1}^{p} a_{i,j}^{N_{i,j}(\tau) + \alpha_{i,j} - 1} \]

We conclude that, conditionally on \(\tau\) and \(x\), \(A\) and \(B\) are independent, i.e., \(P(A \mid B, \tau, x) = P(A \mid \tau, x)\), and

\[ P(A \mid \tau, x) = \prod_{i=1}^{p} D(a_{i}; \alpha_{i} + N_{i}(\tau)), \tag{7} \]

where \(N_{i}(\tau) = (N_{i,1}(\tau), \ldots, N_{i,p}(\tau))\) is the vector of transition counts from state \(i\) up to \(\tau\). Reasoning in the same manner for \(B\), we obtain:

\[ P(B \mid A, \tau, x) = P(B \mid \tau, x) = \prod_{i=1}^{p} D(b_{i}; \beta_{i} + M_{i}(\tau)), \tag{8} \]

where \(M_{i}(\tau) = (M_{i,1}(\tau), \ldots, M_{i,p}(\tau))\) is the vector of transition counts from state \(i\) after \(\tau\). Concerning \(\tau\), it follows from Bayes' Theorem that:

\[ P(\tau \mid A, B, x) = \frac{f(x \mid A, B, \tau)P(\tau)}{\sum_{\tau'=1}^{n} f(x \mid A, B, \tau')P(\tau')} = \frac{f(x \mid A, B, \tau)}{\sum_{\tau'=1}^{n} f(x \mid A, B, \tau')} \tag{9} \]

where the second equality in (9) holds if \(P(\tau)\) is the uniform distribution given in (3). By exploiting Equations (7)-(9) and providing starting values for \(A\), \(B\), and \(\tau\), we can build a Gibbs sequence for each row of \(A\) and \(B\) and for \(\tau\):

\[ a_{i}^{k} \sim D(a_{i}; \alpha_{i} + N_{i}(\tau_{k-1})) \]

\[ b_{i}^{k} \sim D(b_{i}; \beta_{i} + M_{i}(\tau_{k-1})) \]

\[ \tau_{k} \sim P(\tau \mid A^{k}, B^{k}, x) \]

where \( a_{i}^{k}, b_{i}^{k} \), and \( \tau_{k} \) denote the \( k^{th} \) sample of the Gibbs sequence.
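The three conditional sampling steps can be put together in a compact sketch (our reconstruction, not the released Java tool; we assume flat priors with all Dirichlet parameters equal to 1, a uniform prior on the change-point, and all names are ours). It takes a trace of DTMC states and returns the posterior-mean estimate of the change-point:

```python
import math
import random

def dirichlet(alpha, rng):
    """Draw from Dirichlet(alpha) by normalizing Gamma(alpha_j, 1) variates."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [v / s for v in g]

def gibbs_change_point(trace, p, iters=300, burn=150, seed=0):
    """Single-change-point Gibbs sampler over a trace of DTMC states.
    Flat Dirichlet priors (all pseudo-counts 1) and a uniform prior on tau.
    Returns the posterior-mean estimate of tau (as a transition index)."""
    rng = random.Random(seed)
    edges = list(zip(trace, trace[1:]))       # observed transitions
    n = len(edges)
    tau, samples = n // 2, []                 # arbitrary starting point for tau
    for it in range(iters):
        # Transition counts before (N) and after (M) the current tau,
        # with the Dirichlet pseudo-counts already added.
        N = [[1.0] * p for _ in range(p)]
        M = [[1.0] * p for _ in range(p)]
        for t, (i, j) in enumerate(edges, start=1):
            (N if t <= tau else M)[i][j] += 1
        # Sample A | tau, x and B | tau, x row by row.
        A = [dirichlet(N[i], rng) for i in range(p)]
        B = [dirichlet(M[i], rng) for i in range(p)]
        # Sample tau | A, B, x: log-likelihoods are accumulated incrementally;
        # the constant sum of log-B terms cancels in the normalization.
        logf = [0.0] * (n + 1)
        for k, (i, j) in enumerate(edges, start=1):
            logf[k] = logf[k - 1] + math.log(A[i][j]) - math.log(B[i][j])
        mx = max(logf)
        w = [math.exp(v - mx) for v in logf]
        r, acc, tau = rng.random() * sum(w), 0.0, n
        for k, wk in enumerate(w):
            acc += wk
            if acc >= r:
                tau = k
                break
        if it >= burn:
            samples.append(tau)
    return sum(samples) / len(samples)

# Synthetic two-state trace: P(next state = 0) drops from 0.9 to 0.1
# after transition 150, i.e. the change-point to be recovered is near 150.
gen = random.Random(42)
trace = [0]
for t in range(1, 400):
    p0 = 0.9 if t <= 150 else 0.1
    trace.append(0 if gen.random() < p0 else 1)
estimate = gibbs_change_point(trace, p=2)
```

On this synthetic trace, with a change this drastic, the posterior mean lands close to the injected change-point.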
Iterating this sampling process \(N\) times, we obtain a sequence \(((A^{1}, B^{1}, \tau_{1}), \ldots, (A^{N}, B^{N}, \tau_{N}))\) that converges to \(P(A, B, \tau \mid x)\). In particular, for \(N\) large enough, the last values \(\tau_{m+1}, \ldots, \tau_{N}\), where \(m < N\), can be considered as \(N - m\) samples from \(P(\tau \mid x)\) and can be used to estimate \(\tau\). In the same manner, the sequence \((A^{m+1}, \ldots, A^{N})\) can be considered as \(N - m\) samples from \(P(A \mid x)\), and \((B^{m+1}, \ldots, B^{N})\) as \(N - m\) samples from \(P(B \mid x)\), used to estimate \(A\) and \(B\), respectively.

Table 2: Performance: Discrete Distribution of Response Time (RT).

<table> <thead> <tr> <th>RT</th> <th>Before change-point</th> <th>After change-point</th> </tr> </thead> <tbody> <tr> <td>RT &lt; 0.2</td> <td>0.1</td> <td>0.4</td> </tr> <tr> <td>0.2 &lt; RT &lt; 0.5</td> <td>0.4</td> <td>0.1</td> </tr> <tr> <td>0.5 &lt; RT &lt; 0.8</td> <td>0.4</td> <td>0.1</td> </tr> <tr> <td>RT &gt; 0.8</td> <td>0.1</td> <td>0.4</td> </tr> </tbody> </table>

It is important to notice that we conceived an approach based on Bayesian estimation theory because simpler approaches cannot achieve the same level of precision. For example, approaches based on rolling averages computed over the streams of data depend on the size of the window used to compute the average, which in turn depends on the variance of the data (an unknown parameter in our domain). Moreover, simpler approaches based on computing and comparing \(P(A \mid x)\) against \(P(B \mid x)\) are less precise since they do not provide any estimate for \(\tau\).

### 4. VALIDATION

A Java implementation of the change-point detection method presented in this paper has been publicly released\(^5\). The method has been validated by simulation using the ImageModder case study.
More precisely, we developed a client application that invokes the ImageModder service, collected execution traces, and analyzed them. The simulated service can be instructed to change its behavior by setting change-points that affect its performance and/or reliability. In the real world, such changes may be consequences of software updates. In this section, we describe a significant subset of the simulations we performed to validate the approach. Due to the lack of space, we report on a limited number of cases and restrict our discussion to only one quality attribute, namely the performance of the save operation, which is modeled by the DTMC in Figure 2(b). In the experiments described below, we changed the server's response time distribution after specific invocations, thus simulating change-points. Afterwards, we ran a program that implements our change-point analysis approach in different scenarios and with different settings, as explained hereafter. The results we report hold for traces of 400 invocations of the save operation, tracing the response time normalized by the size of the saved image. For each case discussed below, we ran 1000 simulations. The findings obtained by focusing on the performance of this operation apply identically to any other operation and to the reliability model. The interested reader may repeat our experiments and perform others by using our downloadable tool.

**Single Change-point Detection.** We first report on simulations which consider a single change-point occurring in the observation period. As for the Gibbs sampling, we used the Single Sequence approach and a sequence of length 1000 with a burn-in of length 700. As prior parameters \( \alpha_{i} \) and \( \beta_{i} \) of the Dirichlet distributions of \( A \) and \( B \) we used, respectively, the DTMC in Figure 2(b) and a DTMC with equally distributed probabilities attached to all outgoing transitions from state 0.
\(^5\)http://home.dei.polimi.it/tamburrelli/ChangePoint/

Table 3: Characteristics of Posterior Distribution.

<table> <thead> <tr> <th>Gibbs Sequence Length</th> <th>Min</th> <th>Median</th> <th>Mean</th> <th>Max</th> <th>Average Error</th> <th>Max Error</th> <th>$P(140 \leq \tau \leq 160)$</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>141</td> <td>151</td> <td>151.8227</td> <td>162</td> <td>0.01215141</td> <td>0.01316296</td> <td>0.999967</td> </tr> <tr> <td>1000</td> <td>140</td> <td>151</td> <td>151.8181</td> <td>165</td> <td>0.01212084</td> <td>0.01540741</td> <td>0.9999689</td> </tr> </tbody> </table>

Figure 3: Average Posterior Distributions of $\tau$ with change-point at 150.

The latter two choices represent the worst possible scenario, in which we have no clue about where the change-point will take place and about what the new performance model will be. Finally, we changed the server's response time distribution at the 150th invocation, as shown in Table 2 (i.e., the 150th invocation represents the change-point). By running our change-point analysis method in the aforementioned scenario, we obtained the average posterior distribution for $\tau$ illustrated in Figure 3, computed over 1000 simulations. The posterior distribution of $\tau$ is concentrated around 150, as expected. More precisely, it presents the characteristics listed in row 2 of Table 3. By synthesizing an estimate for $\tau$ as the mean value of the posterior distribution, we obtain an average estimate equal to 151.8181. By computing the posterior probability $P(140 \leq \tau \leq 160)$, shown in the last column of Table 3, we are at least 99% confident that a change-point took place between 140 and 160.

**Gibbs Sequence Length.** We now report on simulations concerning the length of the Gibbs sequence. Intuitively, the longer the sequences are, the more precise our distribution and our estimate will be. As shown in Table 3, sequences of length 1000 give a very accurate result.
However, we performed the same tests with shorter sequences to stress the robustness of the approach (we decreased the number of samples down to 100). In this extreme case, illustrated in row 1 of Table 3, the mean value (i.e., the change-point estimate) remains close to 150. Generally speaking, the accuracy of the estimates begins to decay with sequences shorter than 400, as shown in Figure 4, where the estimation error \(E\) is computed as:

$$E = \frac{|ec - rc|}{rc}$$

\(ec\) being the estimated change-point and \(rc\) being the real change-point. Notice that the average estimation error with sequences longer than 400 is always less than 0.01214. From Table 3, we can conclude that our approach is quite robust, since the average and maximum estimation errors are almost negligible. Finally, the appropriate Gibbs sequence length can be automatically determined by adopting the Potential Scale Reduction Factor as described in [16].

**Estimates of \(A\) and \(B\).** The considerations made for the estimate of \(\tau\) hold also for the estimates of \(A\) and \(B\). Figure 5(a) shows the histogram of the transition from state 0 to state 1, with response time \(< 0.2\), over 1000 simulations. It corresponds to the mean value of the posterior distribution computed exploiting samples in the Gibbs sequence. Notice how the average estimate is concentrated around the real value which generated the data (i.e., 0.1). Figure 5(b) shows the same data obtained for the transition from state 0 to state 1 in matrix \(B\), which is concentrated around the desired value 0.4. By adopting again the mean value of the posterior distribution to estimate transition probabilities, we obtain an average value of 0.096 for transition 0-1 before the change-point, with respect to the real value of 0.1, which corresponds to an average estimation error equal to 0.04.

Figure 4: Average Estimation Error.

Figure 5: Average Posterior Distribution for Transition 0-1.
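For instance, plugging the average estimate from Table 3 into the error formula above reproduces the reported magnitude (the small residual difference with respect to the table arises because the table averages per-run errors rather than taking the error of the average estimate):

```python
def estimation_error(ec, rc):
    """Relative estimation error E = |ec - rc| / rc."""
    return abs(ec - rc) / rc

e = estimation_error(151.8181, 150)   # roughly 0.0121
```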
**Change-point Location.** We made experiments to assess the robustness of the change-point detection method with respect to the location of the change-point in the trace. Figure 6 shows the posterior distribution of \( \tau \) when the change described in Table 2 occurs at the 5th invocation of the `save` operation. The distribution is correctly centered around the expected value of 5, and thus it is possible to correctly detect the presence of a change-point at this position by adopting, again, the mean value of the posterior distribution.

**No Change-point.** Let us now consider the scenario in which no change-point occurs in the trace, i.e., the service constantly behaves as described by the second column of Table 2. In this setting, by running the change-point detection we obtained the posterior distribution illustrated in Figure 7. As we said in Section 3.3, \( \tau = 1 \) or \( \tau = n \) indicate that no change occurred in the trace. Figure 7 shows that the posterior distribution is centered around the beginning of the trace, which approximates \( \tau = 1 \) and thus indicates no change.

**Sensitivity to the Initial Model Values.** In the previous experiments, the initial values for matrix \( B \) correspond to a DTMC with equally distributed probabilities attached to all outgoing transitions from state 0, which represents the worst possible scenario in which we have no clue about what the new performance model will be. Figure 5(b) shows that the change-point detection algorithm produces precise estimates of the transitions in matrix \( B \) even though such initial values reflect inaccurate knowledge. The initial value of matrix \( A \) also does not affect the robustness of the method.
In fact, we first ran the detection method by initializing matrix \( A \) with the data of column 2 of Table 2, which corresponds to the actual initial performance values of the `save` operation (a "perfect knowledge" situation), and then we ran the method with opposite initial values, i.e., \( (0.4, 0.1, 0.1, 0.4) \). Figure 8 shows the convergence of the Gibbs sequence which estimates transition 0-1 of matrix \( A \). The figure shows that the convergence proceeds similarly in both cases, except for initial fluctuations in the case of inaccurate initial values.

**Multiple Change-points.** We performed simulations to check the behavior of the method when multiple change-points occur in the observation period. We injected change-points at invocations 150 and 250, over the 400 invocations. Figure 9 shows that, as a result of the simulations, the posterior distribution of \( \tau \) is clearly a bi-modal distribution: each peak in the distribution indicates a change-point. Once these peaks are identified, it is possible to split the trace into two distinct sub-traces that separate the peaks and, by re-running the change-point detection on each, reduce the problem to the case of a single change-point.

**Performance and Multiple Sequence Gibbs Sampling.** The proposed approach is quite efficient. The computation of a single Gibbs sequence of length 1000 with a burn-in of 700 requires about 3.5 seconds to analyze a trace of 400 invocations on a conventional workstation\(^6\). Execution time can be reduced by examining shorter sequences and by applying the Multiple Sequence variant. It is possible to show (though space reasons prevent us from doing so here) that this variant produces estimates of equal accuracy while improving time efficiency. For example, the same workstation can execute the parallel computation of two Gibbs sequences of length 500 with a burn-in of 300 in about 1.7 seconds to analyze the same trace of length 400.

\(^6\)Intel® Core 2 Duo with 4Gb RAM. Implementation in Java v1.6.
### 5. RELATED WORK

Change-points are abrupt variations in the generative parameters of a data stream. Their identification has found important applications in several disciplines, such as finance, biometrics, and robotics (e.g., see [20] or [28]). In particular, many works in the area of intrusion detection systems aim at detecting when a change/violation occurs given a trace log, exploiting Bayesian techniques as described, for example, in [4]. To the best of our knowledge, no existing work has applied these concepts to software reliability and performance, and in particular to SOC. DTMCs and other stochastic models are increasingly used to assess the dependability of software artifacts (e.g., see [19]) and to predict service performance and reliability (e.g., see [25]). The problem of dealing with changes in the external services used by a composite service, and adapting the parameters of its quality model accordingly, is studied in [9, 18]. A framework for component reliability prediction is presented in [8], whose objective is to construct and solve a stochastic reliability model through which software architects may explore competing designs. In particular, the authors tackle the definition of reliability models at the architectural level and the problems related to parameter estimation. Other complementary approaches investigate alternative methods for calibrating model parameters at run time in the context of performance models [29]. The problem of quality of service in SOC is studied by several authors, who focus on how quality can be specified and how it can be the basis for verifiable SLAs. A language for SLAs has been proposed in [26]. Other related areas deal with monitoring and verifying services and service compositions [2, 13, 14]. Run-time verification is another closely related research area (for example, see [7]). Several approaches have been proposed in the literature that deal with non-functional aspects of services and their composition.
In particular, [22] illustrates a framework for modeling and evaluating service-oriented applications and [21] describes performance prediction in the SOC domain exploiting Queuing Networks. An approach for verifying service compositions, starting from UML descriptions and then transforming them into a specific representation that allows validation with respect to concurrency properties, is presented in [12]. A similar approach is described in [11], which shows how to verify service integrations in the presence of resource constraints, with respect to safety and liveness properties. Concerning the statistical techniques we adopted for change-point detection, Carlin et al. [5] provide a hierarchical Bayesian analysis of change-point problems that inspired our work and suggested the adoption of Gibbs sampling. Concerning this integration method, Casella et al. [6] provide a complete discussion of its properties, illustrated by examples. 6. CONCLUSION AND FUTURE WORK In this paper we addressed the problem of identifying changes in the non-functional behavior of software services that are managed by third-party entities and considered as black-box artifacts. We defined the concept of change-point and provided a statistical technique aimed at identifying change-points given an execution trace extracted from running instances of the system. Change-point detection was performed for reliability and performance through the adoption of DTMCs. We implemented a tool supporting change-point analysis as part of the KAMI framework, a toolset which is illustrated in [9, 18]. The tool has been used to validate the method via simulations. We performed extensive simulations, but for space reasons we could only report on selected cases.
For instance, we omitted some interesting results concerning the relation between the length of the trace and the range of values of the probabilities appearing in the models, and the relation between the length of the trace and the distance between different change-points (in a multiple change-point setting). In the future, we plan to complement the simulation-based validation with an analysis of existing on-line services, to obtain quantitative results in a real-world setting. In addition, we will investigate how and when change-point detection can be run to support on-line reactions to detected changes and, more generally, we will explore the trade-offs between on-line and off-line change detection and their dependence on the temporal behavior of the application. Further work may also apply change-point analysis to other models, such as Queuing Networks or continuous-time Markov chains. Appendix: Gibbs Sampling Gibbs sampling is an integration method aimed at computing characteristics (such as the mean or variance) of the marginal density \( f(x) \) of a joint density \( f(x, y_1, \ldots, y_m) \) without requiring the actual computation of the integral \( f(x) = \int \ldots \int f(x, y_1, \ldots, y_m) \, dy_1 \, dy_2 \ldots dy_m \), which can be extremely difficult to perform analytically or even numerically. In particular, Gibbs sampling allows one to generate a sample \( X_1, \ldots, X_n \) from \( f(x) \) without calculating \( f(x) \). This is possible because Gibbs sampling works with the (univariate) conditional distributions of each random variable \( X, Y_1, \ldots, Y_m \) given all the others: \( f(x|y_1, \ldots, y_m), f(y_1|x, y_2, \ldots, y_m), \ldots, f(y_m|x, y_1, \ldots, y_{m-1}) \). To briefly illustrate how Gibbs sampling works, let us consider the simple two-variable case, in which we extract samples from the marginal distribution \( f(x) \) of a joint density \( f(x,y) \) by sampling from the univariate conditional densities \( f(x|y) \) and \( f(y|x) \).
The sampler starts with some initial value \( y_0 \) and generates \( x_0 \) by sampling from \( f(x|y = y_0) \). The sampler then uses \( x_0 \) to generate a new value \( y_1 \), drawing from \( f(y|x = x_0) \). Hence, the sampler proceeds as follows: \[ x_i \sim f(x|y = y_{i-1}) \] \[ y_i \sim f(y|x = x_i) \] By iterating this sampling process \( k \) times, we obtain a Gibbs sequence of length \( k \): \((x_1, y_1), \ldots, (x_k, y_k)\). If we think of each \((x_i, y_i)\) as a realization of a random vector \((X_i, Y_i)\), then, under mild conditions, as \( k \to \infty \) the distribution of \((X_k, Y_k)\) converges to the joint density \( f(x, y) \) (independently of the starting value \( y_0 \)) and hence the distribution of \( X_k \) converges to the marginal distribution \( f(x) \) (see for example [27]). Hence, for large enough \( k \), the last values \( x_{h+1}, \ldots, x_k \) \((h < k)\) of the Gibbs sequence can be considered as \( k - h \) samples from \( f(x) \). It is important to repeat the sampling process a sufficient number of times to obtain a long Gibbs sequence, and to discard (as shown by Figure 10(a)) the initial samples (burn-in removal), which are not distributed according to \( f(x, y) \) and are influenced by the starting values. Alternatively, instead of collecting samples from a single long Gibbs sequence, Gelfand et al. [15] suggest generating \( m \) independent Gibbs sequences of length \( k \) and then using the \( m \) final values of these sequences, as shown by Figure 10(b). The choice of \( k \), and alternative approaches to extracting information from the Gibbs sequence, are discussed in [6]. A complete description of Gibbs sampling is beyond the scope of this paper; further details can be found in [5, 6, 15, 27]. We briefly introduced and justified it here because it is a crucial component of our solution.
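The two-variable iteration above can be sketched in code. The following is a minimal illustration (ours, not the paper's Java implementation) using a standard bivariate normal with correlation `rho` as the joint density, so that both full conditionals are univariate normals and the true marginal \( f(x) \) is \( N(0,1) \):

```python
import random

def gibbs_bivariate_normal(rho, k=5000, burn_in=1000, y0=0.0, seed=0):
    """Gibbs sequence for a standard bivariate normal with correlation rho.
    Draws alternately x_i ~ f(x | y = y_{i-1}) and y_i ~ f(y | x = x_i);
    the x-values kept after burn-in approximate samples from the N(0,1)
    marginal f(x), which is never computed directly."""
    rng = random.Random(seed)
    sd = (1 - rho * rho) ** 0.5       # conditional std dev
    y = y0
    xs = []
    for i in range(k):
        x = rng.gauss(rho * y, sd)    # x_i ~ f(x | y = y_{i-1})
        y = rng.gauss(rho * x, sd)    # y_i ~ f(y | x = x_i)
        if i >= burn_in:              # burn-in removal
            xs.append(x)
    return xs
```

The retained samples should have mean close to 0 and variance close to 1, matching the known marginal.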
Acknowledgments This research has been partially funded by the European Commission, Programme IDEAS-ERC, Project 227977-SMSCom. 7. REFERENCES
A Modal Logic for Abstract Delta Modeling Frank de Boer CWI, Amsterdam Leiden University The Netherlands f.s.de.boer@cwi.nl Michiel Helvensteijn CWI, Amsterdam Leiden University The Netherlands michiel.helvensteijn@cwi.nl Joost Winter CWI, Amsterdam The Netherlands j.winter@cwi.nl Abstract Delta Modeling is a technique for implementing (software) product lines. Deltas are put in a partial order which restricts their application and are then sequentially applied to a core product in order to form specific products in the product line. In this paper we explore the semantics of deltas in more detail. We regard them as relations between products and introduce a multi-modal logic that may be used for reasoning about their effects. Our main innovation is a modality for partially ordered sets of deltas. We prove strong completeness results on both the frame level and the model level and demonstrate the logic through an example. 1. INTRODUCTION Delta Modeling [12, 13, 14] is designed as a technique for implementing software product lines [11]: a way to optimally reuse code between software products which differ only by which features they support. The code is divided into units called deltas, which can incrementally transform a core product in order to generate a product in the product line. Clarke et al. [4, 5] described delta modeling in an abstract algebraic manner called the Abstract Delta Modeling (ADM) approach. In that work, delta modeling is not restricted to software product lines, but rather product lines of any domain. It gives a formal description of deltas, how they can be applied to products, how they can be combined, how they can be linked to features from the feature model, as well as how to avoid and resolve implementation conflicts. Most notably, they put deltas in a partial order to restrict their order of application. 
This allowed for an exact specification of dependencies between deltas, as well as the implementation of desired feature interactions and the resolution of conflicts with a minimum of code duplication. At its core, ADM is about deltas that can transform one product into another product. (Figure 1: Example view of a delta frame with products p, q, r and deltas u, v, w, x, y, z currently visible.) We need a way to specify and reason about the semantics of deltas, and what effect they have on the features that are supported by a product. We need a way to specify that a delta implements a specific feature or that a delta refrains from breaking an existing feature. We need a way to prove that, if certain local guarantees are met, specific global properties, such as product line completeness [8, 9], are then guaranteed to hold. In this paper we introduce a modal logic in order to reason about the semantics of deltas. Basically, we take the set of all possible products as the set of worlds in our frame (Figure 1). We then model deltas as binary relations on this set. In previous work, all deltas were deterministic (functional). We now generalize the notion of delta, and allow deltas to be nondeterministic, as well as non-terminating. In our logic, we want to be able to make judgements such as \[ \vdash \langle d \rangle f \quad \vdash [d] f \] meaning “delta d may implement feature f” (left) and “delta d must implement feature f” (right). Or perhaps, for all φ: \[ \vdash \langle d \rangle \phi \rightarrow [d] \phi \quad \vdash [d] \phi \rightarrow \langle d \rangle \phi \] meaning “delta d is deterministic” (left) and “delta d always terminates” (right). Note that we implicitly quantify over all products that the delta may be applied to. We also introduce an additional modality, representing delta models (partially ordered sets of deltas, Definition 4), in order to make judgements such as \[ \vdash [DM] (f \land g \land h) \] meaning “delta model DM implements features f, g and h”.
The paper is structured as follows. Sections 2 and 3 summarize the relevant theory of abstract delta modeling and modal logic, respectively. Section 4 introduces both the syntax and semantics of our modal logic on a frame level. It also proves strong completeness. Then, Section 5 introduces proposition letters and explores our logic on a model level. Section 6 concludes and discusses related and future work. 2. ABSTRACT DELTA MODELING To make this paper self-contained, we now repeat the relevant theory from ADM. For more detailed information, we refer the reader to [4, 5]. Readers familiar with the theory can skip this section. 2.1 Products and Deltas First, we assume a set of products, $P$. The set of possible modifications to products forms a delta monoid, as follows: **Definition 1 (Delta Monoid).** A delta monoid is a monoid $(D, \cdot, e)$, where $D$ is a set of product modifications (referred to as deltas), and the operation $\cdot : D \times D \rightarrow D$ corresponds to their sequential composition. $y \cdot x$ denotes the modification applying first $x$ and then $y$. The neutral element $e$ of the monoid corresponds to modifying nothing. Applying a delta to a product results in another product. This is captured by the notion of delta action. The following definition differs from previous work [4, 5], in which deltas were always deterministic and would always terminate. The notion of nondeterministic delta action allows for both nondeterminism and nontermination, by resulting in a set of products, rather than a single product. **Definition 2 (Non-Deterministic Delta Action).** A non-deterministic delta action is an operation $-(-) : D \times P \rightarrow \mathcal{P}(P)$. If $d \in D$ and $p \in P$, then $d(p) \subseteq P$ is the set of products that may result from applying delta $d$ to product $p$.
It satisfies the conditions $(y \cdot x)(p) = \bigcup_{q \in x(p)} y(q)$ and $e(p) = \{p\}$. This all leads to the notion of a deltoid, which describes all building blocks necessary to create a product line in a concrete domain. **Definition 3 (Deltoid).** A deltoid is a quintuple $(P, D, \cdot, e, -(-))$, where $P$ is a product set, $(D, \cdot, e)$ is a delta monoid and $-(-)$ is a non-deterministic delta action operator. A delta model describes the set of deltas required to build a specific product, along with a strict partial order on those deltas, restricting the order in which they may be applied. **Definition 4 (Delta Model).** A delta model is a pair $(D, \prec)$, where $D \subseteq \mathcal{D}$ is a finite set of deltas and $\prec \subseteq D \times D$ is a strict partial order on $D$. $x \prec y$ states that $x$ must be applied before $y$, though not necessarily directly before. The partial order represents the intuition that a delta applied later has full access to earlier deltas and more authority over modifications to the product. The semantics of a delta model is defined by its derivations. A derivation is a delta formed by a sequential composition of all deltas from $D$, in some linearization of the partial order. **Definition 5 (Derivation).** Given a delta model $DM = (D, \prec)$, its derivations are defined to be $$\text{deriv}(DM) = \left\{ x_n \cdot \ldots \cdot x_1 \mid x_1, \ldots, x_n \text{ is a linear extension of } (D, \prec) \right\}.$$ Observe that when $D$ is empty, $\text{deriv}(DM) = \{e\}$. 3. MODAL LOGIC In this section, we recall a number of essential notions from the theory of modal logic [2]. We define the basic language, its semantics and the syntactic notion of a proof in general terms. Following this, in the next section, we will instantiate this theory with a language in which the modalities correspond to the deltas from our underlying abstract delta modeling framework.
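Definition 5 can be illustrated with a small brute-force sketch (ours, not the paper's): enumerating the linear extensions of \((D, \prec)\); each extension \(x_1, \ldots, x_n\) then yields the derivation \(x_n \cdot \ldots \cdot x_1\).

```python
from itertools import permutations

def linear_extensions(D, prec):
    """All orderings x_1..x_n of the finite set D consistent with the
    strict partial order prec, given as a set of pairs (a, b) with a < b.
    Brute force over permutations, fine for the small D of a delta model."""
    exts = []
    for seq in permutations(sorted(D)):
        pos = {d: i for i, d in enumerate(seq)}
        if all(pos[a] < pos[b] for (a, b) in prec):
            exts.append(seq)
    return exts
```

For example, with \(D = \{x, y, z\}\) and \(x \prec y\), there are exactly three linear extensions (and hence three derivations): \(xyz\), \(xzy\) and \(zxy\).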
3.1 Language and Semantics We will be concerned with a basic multi-modal language in which we have a set of proposition letters and a set of labeled modalities. In order to keep the story simple and accessible, we will only concern ourselves with unary modalities, as well as, in Section 5, nullary modalities, which can be regarded as playing the role of propositional constants. In principle, however, modalities can have any arity. This basic modal language consists of the following terms: $$\phi ::= \bot \mid p \mid \phi \lor \psi \mid \neg \phi \mid \pi \mid \langle d \rangle \phi$$ Here, $\langle d \rangle$ is any unary modality labeled with $d$, $\pi$ is any nullary modality, and $p$ is any proposition letter taken from a set $\Xi$ of proposition letters. A frame $F$ over this language consists of a set $W$ of worlds and, for each nullary modality $\pi$, a predicate $U_\pi \subseteq W$ and, for each unary modality $\langle d \rangle$, a binary relation $R_d \subseteq W \times W$. A model $M$ over a frame consists of a frame and a valuation function $V : \Xi \rightarrow \mathcal{P}(W)$, mapping proposition letters to sets of worlds. We can now, given a model $M$ and world $w \in W$, define the modal satisfaction relation $\models$ as follows: $$M, w \models \bot \text{ never}$$ $$M, w \models p \text{ iff } w \in V(p)$$ $$M, w \models \phi \lor \psi \text{ iff } M, w \models \phi \text{ or } M, w \models \psi$$ $$M, w \models \neg \phi \text{ iff not } M, w \models \phi$$ $$M, w \models \pi \text{ iff } w \in U_\pi$$ $$M, w \models \langle d \rangle \phi \text{ iff there exists a } v \in W \text{ with } (w, v) \in R_d \text{ and } M, v \models \phi$$ We regard $\phi \land \psi$, $\phi \rightarrow \psi$ and $[d] \phi$ as abbreviations for $\neg (\neg \phi \lor \neg \psi)$, $\neg \phi \lor \psi$ and $\neg \langle d \rangle \neg \phi$, respectively.
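The satisfaction clauses above translate directly into a small model checker. The following sketch is illustrative only; the tuple encoding of formulas and all names are our own assumptions:

```python
# Formulas as tuples: ('bot',), ('p', name), ('or', f, g), ('not', f),
# ('dia', d, f) for <d>phi. A model is (W, R, V): R maps each modality
# label d to a set of pairs R_d, V maps letters to sets of worlds.

def sat(model, w, phi):
    W, R, V = model
    tag = phi[0]
    if tag == 'bot':
        return False
    if tag == 'p':
        return w in V.get(phi[1], set())
    if tag == 'or':
        return sat(model, w, phi[1]) or sat(model, w, phi[2])
    if tag == 'not':
        return not sat(model, w, phi[1])
    if tag == 'dia':  # M,w |= <d>phi iff some R_d-successor of w satisfies phi
        d, f = phi[1], phi[2]
        return any(sat(model, v, f) for (u, v) in R.get(d, set()) if u == w)
    raise ValueError(tag)

def box(d, f):
    """[d]phi as the abbreviation not <d> not phi."""
    return ('not', ('dia', d, ('not', f)))
```

On a two-world model with \(R_d = \{(1, 2)\}\) and feature \(f\) true at world 2, both \(\langle d \rangle f\) and \([d] f\) hold at world 1.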
We furthermore write $M \models \phi$ and say that $\phi$ is *globally true* in $M$ if for all worlds $w$, we have $M, w \models \phi$. Given a frame $F$, we write $F, w \models \phi$ and say $\phi$ is valid at world $w$ if for all models $M$ based on $F$, we have $M, w \models \phi$. We furthermore write $F \models \phi$ and say $\phi$ is valid on $F$ if for all worlds $w$, we have $F, w \models \phi$. When we want to restrict the semantic entailment to a certain class of structures $S$, we superscribe $\models$ with $S$, as in $\models^S$. Given a set of formulas $\Gamma$ and a class of structures $S$ (either models or frames), we say that $\phi$ is a local consequence of $\Gamma$, and write $\Gamma \models^S \phi$, iff, for all models $M$ from $S$ and all worlds $w \in W$: \[ M, w \models \phi \quad \text{whenever} \quad M, w \models \Gamma. \] Likewise, given a set of formulas $\Gamma$ and a class of structures $S$, we say $\phi$ is a global consequence of $\Gamma$, and write $\Gamma \models_g^S \phi$, iff, for all models $M$ from $S$, we have \[ M \models \phi \quad \text{whenever} \quad M \models \Gamma. \] ### 3.2 Proof Theory **Definition 7 (Normal Modal Logic).** Given any modal language, a normal modal logic is a set of formulas $\Lambda$ containing all propositional tautologies, the formula K: \[ [d] (p \rightarrow q) \rightarrow ([d] p \rightarrow [d] q), \] and the formula Dual: \[ \langle d \rangle p \leftrightarrow \neg [d] \neg p \] (for all modalities $d$), and closed under: - Modus ponens: if $\phi \in \Lambda$ and $\phi \rightarrow \psi \in \Lambda$, then $\psi \in \Lambda$; - Uniform substitution: if $\phi \in \Lambda$, then $\phi[\psi/p] \in \Lambda$ for all proposition letters $p$ and formulas $\psi$; and - Generalization: if $\phi \in \Lambda$, then $[d] \phi \in \Lambda$ for all modalities $d$. Given any set of formulas $\Gamma$, a smallest normal modal logic containing all formulas in $\Gamma$ always exists, and will be called the normal modal logic generated by $\Gamma$. Given a normal modal logic $\Lambda$, we write \[ \vdash_\Lambda \phi \] to denote $\phi \in \Lambda$, and \[ \Gamma \vdash_\Lambda \phi \] to express that there are formulas $\psi_1, \ldots, \psi_n \in \Gamma$ such that \[ \vdash_\Lambda \left( \bigwedge_{1 \le i \le n} \psi_i \right) \rightarrow \phi. \] Alternatively, we can also regard the relation $\vdash$ as a proof system.
Here, we regard K and Dual, together with all propositional tautologies, as axioms, and regard the earlier closure properties (modus ponens, uniform substitution, and generalization) as proof rules. A normal modal logic $\Lambda$ is called strongly complete with respect to a class $S$ of frames if, for any set of formulas $\Gamma$ and any formula $\phi$, $\Gamma \models^S \phi$ implies $\Gamma \vdash_\Lambda \phi$. The normal modal logic K, generated by the empty set, is strongly complete with respect to the class of all frames [2]. ### 4. DELTA FRAMES One of the primary goals of this paper is to reason about abstract delta modeling using the language and techniques of modal logic. A good starting point, before moving on to an axiomatic characterization (in which we are concerned with issues such as completeness), is to describe delta modeling using Kripke frames. #### 4.1 Relational Deltas For the convenience of the formalism described in the remainder of the paper, we now start working in a more concrete deltoid, in which deltas are relations between products. **Definition 8 (Relational Deltoid).** A relational deltoid $(P, D, \cdot, e, -(-))$ is a deltoid in which $D = \mathcal{P}(P \times P)$. For a complete characterization of the deltoid and a solid link to earlier work [4, 5], we also need to define the delta action (Definition 2) concretely, but this is quite straightforward. **Definition 9 (Relational Delta Action).** A relational delta action is an operation $-(-) : D \times P \rightarrow \mathcal{P}(P)$ such that for all $d \in D$ and all $p \in P$: \[ d(p) \overset{\text{def}}{=} \{ q \in P \mid (p, q) \in d \} \] This implicitly defines sequential composition $\cdot$ as relation composition and the empty delta $e$ as the identity relation. The paper loses no generality with this approach. The only real difference is that there can no longer exist multiple distinct deltas that represent the same relation.
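Definition 9 can be sketched directly: a relational delta is a set of pairs, the action collects successors, and \(y \cdot x\) is relation composition. This is an illustrative sketch of ours; the helper names are assumptions:

```python
def act(d, p):
    """d(p) = { q | (p, q) in d } for a relational delta d."""
    return {q for (p2, q) in d if p2 == p}

def compose(d2, d1):
    """d2 . d1: apply d1 first, then d2 (relation composition)."""
    return {(p, r) for (p, q) in d1 for (q2, r) in d2 if q == q2}

def identity_delta(P):
    """The neutral element e as the identity relation on the product set P."""
    return {(p, p) for p in P}
```

A quick check of the action law \((y \cdot x)(p) = \bigcup_{q \in x(p)} y(q)\): with \(d_1 = \{(1,2)\}\) and \(d_2 = \{(2,3)\}\), applying \(d_2 \cdot d_1\) to product 1 yields \(\{3\}\) either way.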
#### 4.2 Delta Terms We define the set of delta terms (which can be seen as the syntactic counterparts of deltas) as the smallest set such that: 1. Every delta has a corresponding basic delta term \(d\), 2. Given delta terms \(d_1\) and \(d_2\), \(d_1 \cdot d_2\) and \(d_1 \cup d_2\) are delta terms, and 3. Given a finite set \(D\) of delta terms, and a strict partial order \(\prec \subseteq D \times D\), \((D, \prec)\) is a delta term. From here onward, we use the set of delta terms to label our set of unary modalities, i.e. for each delta term \(d\), there exist unary modalities \(\langle d \rangle\) and \([d]\). We are not using nullary modalities yet, but they become useful in Section 5.2. #### 4.3 Frames and Relations A concrete relational deltoid uniquely defines a delta frame \(F = (W, R_{d_1}, \ldots)\). The set of worlds \(W\) is the set of products \(P\) and the set of binary relations \(R_d\) is the set of deltas \(D\). **Definition 10.** The relation \(R_d\) is the delta corresponding to basic delta term \(d\). We define the binary relations corresponding to compound delta terms inductively, in terms of basic delta terms. First, union and composition: \[ R_{d_1 \cup d_2} \overset{\text{def}}{=} R_{d_1} \cup R_{d_2} \] \[ R_{d_2 \cdot d_1} \overset{\text{def}}{=} \{ (p, r) \mid \exists q : (p, q) \in R_{d_1} \land (q, r) \in R_{d_2} \} \] Finally, the binary relation corresponding to a partial order \((D, \prec)\) on delta terms can be described in terms of the derivations of this partial order as follows: \[ R_{(D, \prec)} \overset{\text{def}}{=} \bigcup_{d \in \text{deriv}((D, \prec))} R_d \] Using \( \text{deriv} \) (Definition 5) here is a bit of an abuse of notation, as it is defined on deltas, not delta terms. However, a delta term version can be defined analogously.
Note that if the relations corresponding to the delta terms in \( D \) are deterministic (functional), and the partial order \((D, \prec)\) has a unique derivation, the relation \( R_{(D, \prec)} \) is deterministic as well. Note also that we can characterize composition in terms of partial orders: \[ R_{d_2 \cdot d_1} = R_{(\{d_1, d_2\}, \{(d_1, d_2)\})} \] and, conversely, we can characterize partial orders in terms of union and composition. **Definition 11 (Delta frames).** Let \( \Delta F \), the class of delta frames, be the class of all frames, with an underlying set of delta terms as modalities, satisfying the relational equalities from Definition 10. We now introduce the following useful notation: **Notation 12.** For a given partially ordered set \( DM = (D, \prec) \) and subset \( D' \subseteq D \), we define the notation: \[ DM \setminus D' \overset{\text{def}}{=} (D \setminus D', \prec') \] where \( \prec' \) is \( \prec \) restricted to \( D \setminus D' \). From Definition 10, the following proposition follows straightforwardly: **Theorem 13.** Given a nonempty delta model \( DM = (D, \prec) \) and any formula \( \phi \), we have \[ \models_{\Delta F} \langle DM \rangle \phi \leftrightarrow \bigvee_{d \in \min_\prec(D)} \langle d \rangle \langle DM \setminus \{d\} \rangle \phi \] where \( \min_\prec(D) \) is the set of \(\prec\)-minimal elements of \( D \), and for the empty delta model \( (\emptyset, \emptyset) \), we have \[ \models_{\Delta F} \langle (\emptyset, \emptyset) \rangle \phi \leftrightarrow \phi \] **Proof.** Induction on the size of \( D \). \( \square \) It is worthwhile to note that the above theorem is similar to what is known as the expansion law of the process algebra CCS [10]. Because delta models are finite and do not contain cycles in our case, the expansion law in combination with the other axioms allows a complete reduction to basic delta terms, as explained in more detail below.
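The relation \(R_{(D,\prec)}\) as a union over derivations can be computed by brute force. The following is an illustrative sketch of ours (not the paper's), with deltas given as relations and the partial order as a set of before/after pairs:

```python
from itertools import permutations
from functools import reduce

def compose(d2, d1):
    """d2 . d1 as relation composition: d1 applied first."""
    return {(p, r) for (p, q) in d1 for (q2, r) in d2 if q == q2}

def delta_model_relation(deltas, prec, P):
    """R_{(D, prec)}: the union of R_d over all derivations d of the delta
    model. `deltas` maps names to relations; prec holds pairs (a, b)
    meaning a must be applied before b."""
    ident = {(p, p) for p in P}   # neutral element e
    R = set()
    for seq in permutations(sorted(deltas)):
        pos = {n: i for i, n in enumerate(seq)}
        if all(pos[a] < pos[b] for (a, b) in prec):
            # derivation x_n . ... . x_1 for the application order seq
            R |= reduce(lambda acc, n: compose(deltas[n], acc), seq, ident)
    return R
```

For instance, with \(u = \{(0,1)\}\), \(v = \{(1,2)\}\) and \(u \prec v\), the only derivation is \(v \cdot u\) and the resulting relation is \(\{(0,2)\}\).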
Dually, the semantic entailment \[ \models_{\Delta F} [DM] \phi \leftrightarrow \bigwedge_{d \in \min_\prec(D)} [d] [DM \setminus \{d\}] \phi \] is also valid, as a direct consequence of Theorem 13. In the next section we will discover that the normal modal logic generated by these formulas (together with axioms for union and composition) is strongly complete with respect to the class of delta frames. ### 4.4 Completeness **Definition 14.** Define the modal logic \( K\Delta \) as the smallest normal modal logic containing all instances of the following axiom schemata: 1. \( \langle DM \rangle \phi \leftrightarrow \bigvee_{d \in \min_\prec(D)} \langle d \rangle \langle DM \setminus \{d\} \rangle \phi \) (nonempty \( DM \)) 2. \( \langle (\emptyset, \emptyset) \rangle \phi \leftrightarrow \phi \) 3. \( \langle d_2 \cdot d_1 \rangle \phi \leftrightarrow \langle d_1 \rangle \langle d_2 \rangle \phi \) 4. \( \langle d_1 \cup d_2 \rangle \phi \leftrightarrow \langle d_1 \rangle \phi \lor \langle d_2 \rangle \phi \) We call instantiations of these axiom schemata ‘\( \Delta \) axioms’. These allow us to formulate the following completeness result, after defining a translation function \( t \) as follows (that \( t \) is well-defined trivially follows from defining a fitting complexity function on formulas): **Definition 15.** \[ \begin{align*} t(f) & \overset{\text{def}}{=} f \quad \text{for proposition letters } f \\ t(\neg \phi) & \overset{\text{def}}{=} \neg t(\phi) \\ t(\phi \lor \psi) & \overset{\text{def}}{=} t(\phi) \lor t(\psi) \\ t(\langle d \rangle \phi) & \overset{\text{def}}{=} \langle d \rangle t(\phi) \quad \text{for basic delta terms } d \\ t(\langle d_2 \cdot d_1 \rangle \phi) & \overset{\text{def}}{=} t(\langle d_1 \rangle \langle d_2 \rangle \phi) \\ t(\langle d_1 \cup d_2 \rangle \phi) & \overset{\text{def}}{=} t(\langle d_1 \rangle \phi) \lor t(\langle d_2 \rangle \phi) \\ t(\langle DM \rangle \phi) & \overset{\text{def}}{=} \bigvee_{d \in \text{deriv}(DM)} t(\langle d \rangle \phi) \end{align*} \] The idea behind this function is to translate any formula into an equivalent formula in which all unary modalities are labeled only by basic delta terms. This enables us to forget about compound delta terms, allowing us to construct our completeness proof in terms of the completeness of \( K \) w.r.t. the class of all frames. **Lemma 16.** For all \( \Gamma \) and \( \phi \), we have: 1.
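The translation eliminating compound delta terms can be sketched as a recursive function over tuple-encoded formulas (an illustrative sketch of ours; delta terms are strings for basic terms or tagged tuples, and the delta-model case is omitted for brevity):

```python
def t(phi):
    """Push compound delta terms down to basic ones:
    <d2 . d1>phi  ->  <d1><d2>phi   (apply d1 first)
    <d1 u d2>phi  ->  <d1>phi or <d2>phi."""
    tag = phi[0]
    if tag in ('p', 'bot'):
        return phi
    if tag == 'not':
        return ('not', t(phi[1]))
    if tag == 'or':
        return ('or', t(phi[1]), t(phi[2]))
    if tag == 'dia':
        d, f = phi[1], phi[2]
        if isinstance(d, str):                 # basic delta term
            return ('dia', d, t(f))
        if d[0] == 'comp':                     # d = ('comp', d2, d1)
            _, d2, d1 = d
            return t(('dia', d1, ('dia', d2, f)))
        if d[0] == 'union':                    # d = ('union', d1, d2)
            _, d1, d2 = d
            return ('or', t(('dia', d1, f)), t(('dia', d2, f)))
    raise ValueError(phi)
```

The result contains only basic delta terms under diamonds, mirroring the purpose of Definition 15.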
\( \Gamma \models_{\Delta F} \phi \iff \Gamma \models_{\Delta F} t(\phi) \) 2. \( \Gamma \vdash_{K\Delta} \phi \iff \Gamma \vdash_{K\Delta} t(\phi) \) 3. \( \Gamma \models_{\Delta F} t(\phi) \iff \Gamma \models t(\phi) \) **Proof.** The first and second parts of the lemma can be proven by induction (on the complexity of formulas as well as that of delta terms); the third part follows from the observation that for any translated formula, only the relations corresponding to basic delta terms are used: hence, we are simply treating our delta frame as a regular frame. \( \square \) **Theorem 17.** \( K\Delta \) is strongly complete w.r.t. the class of delta frames. **Proof.** This amounts to saying that, for any \( \Gamma \) and \( \phi \), if \( \Gamma \models_{\Delta F} \phi \), then \( \Gamma \vdash_{K\Delta} \phi \). But, if \( \Gamma \models_{\Delta F} \phi \), then, by the first part of Lemma 16, we have \( \Gamma \models_{\Delta F} t(\phi) \), and by part 3 of Lemma 16, we now have \( \Gamma \models t(\phi) \). Completeness of \( K \) now gives \( \Gamma \vdash_K t(\phi) \), and because \( K \subseteq K\Delta \), we also get \( \Gamma \vdash_{K\Delta} t(\phi) \). Finally, part 2 of Lemma 16 now yields \( \Gamma \vdash_{K\Delta} \phi \). \( \square \) ### 5. MODELS ON DELTA FRAMES As we can now reason on the frame level with the proof system of Section 4, we would also like to reason on the model level. Recall that a model \( M = (\mathcal{F}, V) \) is a frame augmented with a valuation function which maps proposition letters from \( \Xi \) to the sets of worlds in which they are true. Our worlds are products from \( P \). What we want to reason about is the features that are implemented by those products, so we propose that \( F \subseteq \Xi \): features are a subset of the proposition letters. We would like to prove properties about the effect of deltas on specific features, given axiomatic characterizations of specific models.
5.1 Semantic Feature Model In Definition 6 we see features as labels. A feature model $\Phi$ indicates which features are allowed to be selected together on a conceptual level. However, if we have $M, w \models f$ for some $f \in F$, it means that feature $f$ is actually implemented in product $w$. It is a semantic judgment. An interesting relation exists, however. A (syntactic) feature model is only sensible if all of its feature configurations can actually be implemented. We define a semantic feature model as follows: **Definition 18 (Semantic Feature Model).** Given a model $M$, we define its semantic feature model $\Phi_M \subseteq \mathcal{P}(F)$ as the set of sets of features that can semantically be implemented together: $$\Phi_M \overset{\text{def}}{=} \{ V'(w) \cap F \mid w \in W \}$$ where $V' : W \rightarrow \mathcal{P}(\Xi)$ is the function mapping each world to the set of proposition letters that are true there: $$V'(w) \overset{\text{def}}{=} \{ p \in \Xi \mid w \in V(p) \}$$ We expect a sensible syntactic feature model to be a subset of the semantic feature model: $$\Phi \subseteq \Phi_M$$ meaning that all valid feature configurations contain only features that can potentially be implemented together. 5.2 Proof System Note that the proof system from Section 4 is not sound with respect to global semantic entailment on models. For example, consider the following ‘proof’: 1. $f \rightarrow [d] g$ axiom 2. $f \rightarrow [d] \neg g$ uniform substitution of $\neg g$ for $g$ So we have $$f \rightarrow [d] g \vdash_{K} f \rightarrow [d] \neg g,$$ but at the same time the (global) semantic consequence $$f \rightarrow [d] g \models_{\Delta F} f \rightarrow [d] \neg g$$ is easily seen to be false. The culprit here is our usage of uniform substitution. This proof rule produces new validities from old validities, but it does not preserve truth on the model level. We still need the uniform substitution rule, however, to prove truths such as: 1.
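Definition 18 translates directly into code. The following is an illustrative sketch of ours (names are assumptions), computing \(\Phi_M\) from the valuation:

```python
def semantic_feature_model(W, V, F):
    """Phi_M: the set of sets of features actually implemented together in
    some product (world). V maps proposition letters to sets of worlds;
    F is the subset of letters that are features."""
    def letters_true_at(w):
        # V'(w): all proposition letters true at world w
        return {p for p, ws in V.items() if w in ws}
    return {frozenset(letters_true_at(w) & F) for w in W}
```

A syntactic feature model \(\Phi\) is then sensible exactly when every configuration in \(\Phi\) appears in this computed set.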
$p \lor \neg p$ — propositional tautology
2. $[d] f \lor \neg [d] f$ — uniform substitution on $p$

The trick is to allow uniform substitution only on newly produced proposition letters, but not on the original features in our axioms. This may be accomplished by first transforming all feature propositions in our axioms into nullary modalities, to which uniform substitution does not apply. We can then prove any valid formula in the proof system of frames. We first define the following translation:

**Definition 19.** $$u(f) \overset{\text{def}}{=} \langle f \rangle \quad \text{for proposition letters } f$$ $$u(\neg \phi) \overset{\text{def}}{=} \neg u(\phi)$$ where $\langle f \rangle$ denotes the nullary modality corresponding to $f$. For the other shapes of formulas the $u$ translation is simply propagated down to the proposition letters, leaving everything else unchanged. We also lift the function $u$ to sets of formulas in the expected manner.

Furthermore, we also define a translation function (overloading the earlier name $u$) from models to frames, dropping the valuation function $V$ but augmenting the frame with, for every proposition letter in the model, a unary relation (interpreting the corresponding nullary modality) which holds at precisely the worlds in which this proposition letter was true under $V$. This enables us to formulate the following translation lemma:

**Lemma 20.** For all models $M$, worlds $w$ and formulas $\phi$: $$M, w \models \phi \iff u(M), w \models u(\phi)$$ and $$M \models \phi \iff u(M) \models u(\phi).$$

**Proof.** Induction on the complexity of formulas. The basic propositional case trivially follows from our construction of nullary modalities in terms of proposition letters. $\square$

This lemma enables us to prove the following soundness result w.r.t. global truth on the model level:

**Theorem 21.** For all sets of formulas $\Gamma$ and all formulas $\phi$: $$\text{if } u(\Gamma) \vdash_{K\Delta} u(\phi), \text{ then } \Gamma \models_{\Delta F} \phi.$$

**Proof.** Assume $u(\Gamma) \vdash_{K\Delta} u(\phi)$.
Let $M$ be a model (based on a delta frame) such that $M \models \Gamma$. Then, by Lemma 20, we have $u(M) \models u(\Gamma)$. Now let $\Lambda$ be the logic of the class of delta frames $\{ \mathfrak{F} \in \Delta F \mid \mathfrak{F} \models u(\Gamma) \}$. Because $\Lambda$ is a normal modal logic, it is closed under the proof rules, and hence it follows from the fact that $u(\Gamma) \subseteq \Lambda$ that $u(\phi) \in \Lambda$. It follows that $u(\phi)$ is valid on this class of frames, so we have $u(M) \models u(\phi)$, and by another application of Lemma 20, we get $M \models \phi$. Hence, $\Gamma \models_{\Delta F} \phi$. □

5.3 Relative Completeness

In Hoare logic, relative completeness has been established for classes of models which allow the expressibility in the logic of weakest preconditions [1]. For example, in [7] a class of arithmetical models has been introduced which allows expressibility in the logic of weakest preconditions by means of arithmetically based encoding techniques. Following this general approach to completeness of Hoare logics, we want to identify a class of models for which the converse of Theorem 21 holds. More specifically, we want to identify a class of models $\mathfrak{M}$ for which there exists an axiomatisation $\Gamma_{\mathfrak{M}}$ in $K\Delta$ such that $\mathfrak{M} \models \phi$ implies $u(\Gamma_{\mathfrak{M}}) \vdash_{K\Delta} u(\phi)$. Note that in our modal logic $K\Delta$ the weakest precondition of a delta $d$ and postcondition $\phi$ can be directly expressed by formulas of the form $[d] \phi$. A natural set of models to consider are those models which allow the expression of such weakest preconditions in terms of a boolean combination of features themselves.
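On a finite model, the weakest precondition of a basic delta $d$ and a postcondition $\phi$ is simply the set of worlds all of whose $d$-successors satisfy $\phi$. A minimal sketch follows; the three-world model and all names are illustrative assumptions of ours, not material from the paper:

```python
# Sketch: the weakest precondition of a delta d and a postcondition,
# represented extensionally as the set of worlds satisfying phi.
# Illustrative only; the model and names are not from the paper.

def weakest_precondition(worlds, rel, post):
    """Worlds w such that every d-successor of w lies in post,
    i.e. exactly the worlds where [d]phi holds."""
    return {w for w in worlds
            if all(v in post for (u, v) in rel if u == w)}

worlds = {"p0", "p1", "p2"}
rel = {("p0", "p1"), ("p0", "p2"), ("p1", "p2")}   # relation of delta d
post = {"p1", "p2"}                                 # worlds satisfying phi

print(sorted(weakest_precondition(worlds, rel, post)))   # ['p0', 'p1', 'p2']
```

Worlds without $d$-successors (here `p2`) satisfy $[d]\phi$ vacuously, so they always belong to the weakest precondition.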
**Definition 22 (Precondition Expressibility).** A model $\mathfrak{M}$ allows the expression in $K\Delta$ of weakest preconditions iff for every formula $[d] \phi$, where $d$ is a basic delta term and $\phi$ is a boolean combination of features, there exists a boolean combination of features $\phi'$ such that, for every world $w$, $$\mathfrak{M}, w \models [d] \phi \iff \mathfrak{M}, w \models \phi'.$$

For any model $\mathfrak{M}$, let $\Gamma_{\mathfrak{M}}$ denote the propositional theory of its underlying semantic feature model, extended with the theory $\text{WP}(\mathfrak{M})$ defined by $$\{ [d] \phi \leftrightarrow \phi' \mid \mathfrak{M} \models [d] \phi \leftrightarrow \phi' \}$$ where $d$ is a basic delta term, and both $\phi$ and $\phi'$ are boolean combinations of features. We have the following relative completeness theorem.

**Theorem 23.** For any model $\mathfrak{M}$ that allows the expression in $\mathbf{K\Delta}$ of weakest preconditions we have $$\text{if } \mathfrak{M} \models \phi, \text{ then } u(\Gamma_{\mathfrak{M}}) \vdash_{\mathbf{K\Delta}} u(\phi)$$ for every formula $\phi$.

**Proof.** It suffices to show that, using the transformation function $t$ from Definition 13, the propositional theory $\Gamma_{\mathfrak{M}}$ allows us to reduce every formula $\phi$ to a boolean combination of features. The proof proceeds by a straightforward induction on $\phi$. $\square$

Note that for models that allow the expression of weakest preconditions, our modal logic $\mathbf{K\Delta}$ is in fact a conservative extension of the propositional logic of the underlying semantic feature models. Of particular interest is also that for deterministic delta models we only need to require that every formula $[d] f$, where $d$ is a basic delta term and $f$ is a single feature, can be expressed by a boolean combination of features itself.

### 5.4 Example

We now illustrate the use of $\mathbf{K\Delta}$ through an example proof. Say we have the feature model as shown in Figure 2.
The features $f$, $g$, and $h$ are implemented by the delta model $DM$ in Figure 3. The feature $t$ is satisfied even in the empty core product, on which we'd like to apply those deltas. We now introduce a set of basic axioms valid in this model:

**Axiom 24 (Delta Model Axioms).**

1. $f \rightarrow t$
2. $g \rightarrow f$
3. $h \rightarrow f$
4. $g \rightarrow [c]g$
5. $h \rightarrow [b]h$
6. $t \rightarrow [a]f$
7. $f \rightarrow [b]g$
8. $f \rightarrow [c]h$
9. $g \rightarrow [bc]g$
10. $h \rightarrow [bc]h$

Axioms 1, 2 and 3 are due to the feature model shown in Figure 2. It is generally the case that when a subfeature is implemented, its superfeature is implemented as well. Axioms 4 and 5 are due to a property we assume the underlying deltoid to have, called non-interference [8], which states that commuting deltas cannot interfere with each other's features. Axioms 6 to 10 are by design of the deltas $a$, $b$, $c$ and $bc$. We assume that they were developed such that $a$, $b$ and $c$ implement the features $f$, $g$ and $h$ (Axioms 6, 7 and 8), taking into account only the deltas ‘above’ them, and that the conflict resolving delta $bc$ [4, 5] doesn't break the features implemented by the previous deltas (Axioms 9 and 10).

Now say we have a core product $c \in \mathcal{P}$ with $c \models t$. We'd like to prove the following property:

**Proposition 25.** $c \models [DM] (t \land f \land g \land h)$

In order to prove this property more succinctly, we introduce the following auxiliary proof rules:

**Lemma 26 (l3).** For all formulas $\phi$, $\psi$ and $\chi$, and for all deltas $d_1, \ldots, d_n$, we have: $$\phi \rightarrow [d_1] \cdots [d_n] \psi,\ \psi \rightarrow \chi \vdash \phi \rightarrow [d_1] \cdots [d_n] \chi$$

**Proof.** By induction on $n$. $\square$

**Lemma 27 (l4).** For all formulas $\phi$ and $\psi$ and all deltas $d$, we have: $$\vdash ([d] \phi \land [d] \psi) \rightarrow [d] (\phi \land \psi)$$

**Proof.** See [2, Example 1.40].
$\square$

**Proof of Proposition 25.**

11. $\vdash t \rightarrow [a][b]g$ (l3: 6, 7)
12. $\vdash t \rightarrow [a][b]f$ (l3: 11, 2)
13. $\vdash t \rightarrow [a][b][c]h$ (l3: 12, 8)
14. $\vdash t \rightarrow [a][b][c]g$ (l3: 11, 4)
15. $\vdash t \rightarrow [a][b][c](g \land h)$ (l4: 13, 14)
16. $\vdash t \rightarrow [a][b][c][bc]g$ (l3: 14, 9)
17. $\vdash t \rightarrow [a][b][c][bc]h$ (l3: 13, 10)
18. $\vdash t \rightarrow [a][b][c][bc](g \land h)$ (l4: 16, 17)
19. $\vdash g \land h \rightarrow t \land f \land g \land h$ (prop: 1, 2)
20. $\vdash t \rightarrow [a][b][c][bc](t \land f \land g \land h)$ (l3: 18, 19)

Formula (21) is derived in a symmetric manner to (20), with the roles of $b$ and $c$ interchanged:

21. $\vdash t \rightarrow [a][c][b][bc](t \land f \land g \land h)$ (symmetric)
22. $\vdash t \rightarrow [DM](t \land f \land g \land h)$ ($I_\Delta$: 20, 21)

Then, with $c \models t$, we have our result. $\square$

We have skipped many steps in this proof, mostly those concerned with invoking propositional tautologies and applying modus ponens. We have kept only the most interesting steps, those that directly use our axioms.

5.5 Alternate Propositions

In this section we have chosen the set of features $\mathcal{F}$ as the significant set of propositions.
But there are several reasons for choosing an alternate or additional set of propositions. First, there may be some desired interaction between features that would not be satisfied by an implementation of any strict subset of those features. In that case, we'd want to have sets of features as propositions, $\mathcal{P}(\mathcal{F}) \subseteq \Sigma$, rather than individual features. We would then assume the additional axiom $$F \cup G \rightarrow F \land G$$ for all $F, G \subseteq \mathcal{F}$, where each feature set is read as a single proposition letter. This approach was taken in [8].

Furthermore, it is possible that different products may implement the exact same features. So we may want additional proposition letters to distinguish between them in our logic and reason on a somewhat lower level. Such proposition letters may include the presence of specific classes or methods in an object oriented setting.

### 6. CONCLUSION

In this paper we provided a method that will be useful for further research into abstract delta modeling. The modal logic $\mathbf{K\Delta}$ allows us to reason more easily about the semantics of deltas and delta models, in a way consistent with previous work. We prove strong completeness of the logic with respect to the class of all delta frames. We also discuss a proof technique on the level of models, prove its soundness and relative completeness, and illustrate it through an example.

The delta theory in this paper is based on Abstract Delta Modeling [4, 5]. We remain in a similarly abstract setting, yet generalize even further by removing the assumption that deltas are deterministic and terminating entities. The logic and proof techniques in this paper will be useful for proving properties of the Delta Modeling Workflow [8, 9]. That was, in fact, part of the motivation for the research in this paper.

Completeness proofs in modal logic have a long-standing history, closely tied to the history of relational semantics based on Kripke frames. A comprehensive survey of this history can be found in e.g.
[2, Section 1.8]. The modal logic presented in this paper has a flavour very reminiscent of dynamic logics such as PDL [6]. A crucial difference, however, is that the logic presented here is simpler (and hence easier to work with) due to the absence of operations such as iteration or tests. Due to this simplicity, we can easily unravel complex modalities into simpler ones, and under certain conditions even reduce them to propositional formulas, enabling us to obtain the main results from Section 5.

Possible future work following up the initial research in this paper may include characterizations of the modal expressivity of basic properties of delta models and interactions between deltas, including positive as well as limitative results. In the case of limitative results, it may be worthwhile to look into the additional expressivity that the modal $\mu$-calculus has to offer [3]. This additional expressivity may, for example, be required to express the condition that a conflict between two deltas is resolved by a third delta. Another interesting research direction is the use of our logical framework in the synthesis of delta models using model checking techniques.

### 7. REFERENCES
This is the manual for Cluster 3.0. Cluster was originally written by Michael Eisen while at Stanford University. We have modified the $k$-means clustering algorithm in Cluster, and extended the algorithm for Self-Organizing Maps to include two-dimensional rectangular grids. The Euclidean distance and the city-block distance were added as new distance measures between gene expression data. The proprietary Numerical Recipes routines, which were used in the original version of Cluster/TreeView, have been replaced by open source software. Cluster 3.0 is available for Windows, Mac OS X, Linux, and Unix.

November 5, 2002. Michiel de Hoon, Human Genome Center, University of Tokyo.

# Table of Contents

1 Introduction
2 Loading, filtering, and adjusting data
  2.1 Loading Data
  2.2 Filtering Data
  2.3 Adjusting Data
    2.3.1 Log transformation
    2.3.2 Mean/Median Centering
    2.3.3 Normalization
3 Distance/Similarity measures
  3.1 Distance measures based on the Pearson correlation
  3.2 Non-parametric distance measures
  3.3 Distance measures related to the Euclidean distance
    3.3.1 Euclidean distance
    3.3.2 City-block distance
  3.4 Missing values
  3.5 Calculating the distance matrix
4 Clustering techniques
  4.1 Hierarchical Clustering
    4.1.1 Centroid Linkage Clustering
    4.1.2 Single Linkage Clustering
    4.1.3 Complete Linkage Clustering
    4.1.4 Average Linkage Clustering
    4.1.5 Weighting
    4.1.6 Ordering of Output File
    4.1.7 Output Files
  4.2 The k-means Clustering Algorithm
  4.3 Self-Organizing Maps
  4.4 Principal Component Analysis
5 Running Cluster 3.0 as a command line program
6 TreeView
7 Code Development Information
8 Bibliography

# 1 Introduction

Cluster and TreeView are programs that provide a computational and graphical environment for analyzing data from DNA microarray experiments, or other genomic datasets. The program Cluster can organize and analyze the data in a number of different ways. TreeView allows the organized data to be visualized and browsed. This manual is intended as a reference for using the software, and not as a comprehensive introduction to the methods employed. Many of the methods are drawn from standard statistical cluster analysis. There are excellent textbooks available on cluster analysis which are listed in the bibliography at the end. The bibliography also contains citations for recent publications in the biological sciences, especially genomics, that employ methods similar to those used here.

# 2 Loading, filtering, and adjusting data

Data can be loaded into Cluster by choosing Load data file under the File menu. A number of options are provided for adjusting and filtering the data you have loaded. These functions are accessed via the Filter Data and Adjust Data tabs.
### 2.1 Loading Data

The first step in using Cluster is to import data. Currently, Cluster only reads tab-delimited text files in a particular format, described below. Such tab-delimited text files can be created and exported in any standard spreadsheet program, such as Microsoft Excel. An example data file can be found under the File format help item in the Help menu. This contains all the information you need for making a Cluster input file.

By convention, in Cluster input tables rows represent genes and columns represent samples or observations (e.g., a single microarray hybridization). For a simple timecourse, a minimal Cluster input file would look like this:

| YORF | 0 minutes | 30 minutes | 1 hour | 2 hours | 4 hours |
|---|---|---|---|---|---|
| YAL001C | 1 | 1.3 | 2.4 | 5.8 | 2.4 |
| YAL002W | 0.9 | 0.8 | 0.7 | 0.5 | 0.2 |
| YAL005C | 1.1 | 1.3 | 0.8 | | 0.4 |
| YAL015C | 1.2 | 1 | 1.1 | 4.5 | 8.3 |

Each row (gene) has an identifier that always goes in the first column. Here we are using yeast open reading frame codes. Each column (sample) has a label that is always in the first row; here the labels describe the time at which a sample was taken. The first column of the first row contains a special field that tells the program what kind of objects are in each row. In this case, YORF stands for yeast open reading frame. This field can be any alphanumeric value. It is used in TreeView to specify how rows are linked to external websites.
The remaining cells in the table contain data for the appropriate gene and sample. The 5.8 in row 2 column 4 means that the observed data value for gene YAL001C at 2 hours was 5.8. Missing values are acceptable and are designated by empty cells (e.g., YAL005C at 2 hours).

It is possible to have additional information in the input file. A maximal Cluster input file would look like this:

| YORF | NAME | GWEIGHT | GORDER | 0 | 30 | 1 | 2 | 4 |
|---|---|---|---|---|---|---|---|---|
| EORDER | | | | 5 | 3 | 2 | 1 | 1 |
| YAL001C | TFIIC 133 KD SUBUNIT | 0.04 | 2 | 1 | 1.3 | 2.4 | 5.8 | 2.4 |
| YAL005C | CYTOSOLIC HSP70 | 0.04 | 1 | 1.1 | 1.3 | 0.8 | | 0.4 |

The NAME, GWEIGHT and GORDER columns and the EORDER row are optional. By default, TreeView uses the ID in column 1 as a label for each gene. The NAME column allows you to specify a label for each gene that is distinct from the ID in column 1. The other rows and columns will be described later in this text.

When Cluster 3.0 opens the data file, the number of columns in each row is checked. If a given row contains fewer or more columns than needed, an error message is displayed.

Demo data

A demo datafile, which will be used in all of the examples here, is available at http://rana.lbl.gov/downloads/data/demo.txt and is mirrored at http://bonsai.hgc.jp/~mdehoon/software/cluster/demo.txt. The datafile contains yeast gene expression data described in Eisen et al. (1998) [see references at end]. Download this data and load it into Cluster. Cluster will give you information about the loaded datafile.
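The minimal tab-delimited format described above can be read with a few lines of Python. The following sketch is our own illustration, not part of Cluster or the C Clustering Library; empty cells become `None`, matching the manual's missing-value convention:

```python
# Sketch: reading a minimal Cluster-format tab-delimited file.
# Empty cells become None (missing values).
# Illustrative only; this is not part of Cluster itself.

def read_cluster_file(lines):
    header = lines[0].rstrip("\n").split("\t")
    field, columns = header[0], header[1:]       # e.g. 'YORF', then time points
    genes = {}
    for line in lines[1:]:
        cells = line.rstrip("\n").split("\t")
        if len(cells) != len(header):            # Cluster also checks this
            raise ValueError("row has wrong number of columns: %r" % cells[0])
        genes[cells[0]] = [float(c) if c != "" else None for c in cells[1:]]
    return field, columns, genes

demo = [
    "YORF\t0 minutes\t30 minutes\t1 hour\t2 hours\t4 hours\n",
    "YAL001C\t1\t1.3\t2.4\t5.8\t2.4\n",
    "YAL005C\t1.1\t1.3\t0.8\t\t0.4\n",
]
field, columns, genes = read_cluster_file(demo)
print(genes["YAL001C"][3])   # 5.8  (YAL001C at 2 hours)
print(genes["YAL005C"][3])   # None (missing value)
```

A row with too few or too many cells raises an error, mirroring the column-count check Cluster 3.0 performs when it opens a data file.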
### 2.2 Filtering Data

The Filter Data tab allows you to remove genes that do not have certain desired properties from your dataset. The currently available properties that can be used to filter data are:

- **% Present >= X.** This removes all genes that have missing values in greater than $(100 - X)$ percent of the columns.
- **SD (Gene Vector) >= X.** This removes all genes that have standard deviations of observed values less than X.
- **At least X Observations with abs(Val) >= Y.** This removes all genes that do not have at least X observations with absolute values greater than Y.
- **MaxVal-MinVal >= X.** This removes all genes whose maximum minus minimum values are less than X.

These are fairly self-explanatory. When you press Filter, the filters are not immediately applied to the dataset. You are first told how many genes would have passed the filter. If you want to accept the filter, press Accept; otherwise no changes are made.

### 2.3 Adjusting Data

From the Adjust Data tab, you can perform a number of operations that alter the underlying data in the imported table. These operations are:

- **Log Transform Data**: replace all data values $x$ by $\log_2 (x)$.
- **Center genes [mean or median]**: Subtract the row-wise mean or median from the values in each row of data, so that the mean or median value of each row is 0.
- **Center arrays [mean or median]**: Subtract the column-wise mean or median from the values in each column of data, so that the mean or median value of each column is 0.
- **Normalize genes**: Multiply all values in each row of data by a scale factor $S$ so that the sum of the squares of the values in each row is 1.0 (a separate $S$ is computed for each row).
- **Normalize arrays**: Multiply all values in each column of data by a scale factor $S$ so that the sum of the squares of the values in each column is 1.0 (a separate $S$ is computed for each column).
These operations are not associative, so the order in which these operations are applied is very important, and you should consider it carefully before you apply these operations. The order of operations is (only checked operations are performed):

- Log transform all values.
- Center rows by subtracting the mean or median.
- Normalize rows.
- Center columns by subtracting the mean or median.
- Normalize columns.

#### 2.3.1 Log transformation

The results of many DNA microarray experiments are fluorescent ratios. Ratio measurements are most naturally processed in log space. Consider an experiment where you are looking at gene expression over time, and the results are relative expression levels compared to time 0. Assume that at timepoint 1 a gene is unchanged, at timepoint 2 it is up 2-fold, and at timepoint 3 it is down 2-fold relative to time 0. The raw ratio values are 1.0, 2.0 and 0.5. In most applications, you want to think of 2-fold up and 2-fold down as being the same magnitude of change, but in an opposite direction. In raw ratio space, however, the difference between timepoint 1 and 2 is 1.0, while between timepoint 1 and 3 it is 0.5. Thus mathematical operations that use the difference between values would treat the 2-fold up change as twice as significant as the 2-fold down change. Usually, you do not want this. In log space (we use log base 2 for simplicity) the data points become 0, 1.0, -1.0. With these values, 2-fold up and 2-fold down are symmetric about 0. For most applications, we recommend you work in log space.

#### 2.3.2 Mean/Median Centering

Consider a now common experimental design where you are looking at a large number of tumor samples all compared to a common reference sample made from a collection of cell-lines. For each gene, you have a series of ratio values that are relative to the expression level of that gene in the reference sample.
Since the reference sample really has nothing to do with your experiment, you want your analysis to be independent of the amount of a gene present in the reference sample. This is achieved by adjusting the values of each gene to reflect their variation from some property of the series of observed values such as the mean or median. This is what mean and/or median centering of genes does. Centering makes less sense in experiments where the reference sample is part of the experiment, as it is in many timecourses.

Centering the data for columns/arrays can also be used to remove certain types of biases. The results of many two-color fluorescent hybridization experiments are not corrected for systematic biases in ratios that are the result of differences in RNA amounts, labeling efficiency and image acquisition parameters. Such biases have the effect of multiplying ratios for all genes by a fixed scalar. Mean or median centering the data in log-space has the effect of correcting this bias, although it should be noted that an assumption is being made in correcting this bias, which is that the average gene in a given experiment is expected to have a ratio of 1.0 (or log-ratio of 0). In general, I recommend the use of median rather than mean centering, as it is more robust against outliers.

#### 2.3.3 Normalization

Normalization sets the magnitude (sum of the squares of the values) of a row/column vector to 1.0. Most of the distance metrics used by Cluster work with internally normalized data vectors, but the data are output as they were originally entered. If you want to output normalized vectors, you should select this option. A sample series of operations for raw data would be:

- Adjust Cycle 1) log transform
- Adjust Cycle 2) median center genes and arrays
- repeat (2) five to ten times
- Adjust Cycle 3) normalize genes and arrays
- repeat (3) five to ten times

This results in a log-transformed, median polished (i.e.
all row-wise and column-wise median values are close to zero) and normal (i.e. all row and column magnitudes are close to 1.0) dataset. After performing these operations you should save the dataset.

# 3 Distance/Similarity measures

The first choice that must be made is how similarity (or alternatively, distance) between gene expression data is to be defined. There are many ways to compute how similar two series of numbers are. Cluster provides eight options.

### 3.1 Distance measures based on the Pearson correlation

The most commonly used similarity metrics are based on Pearson correlation. The Pearson correlation coefficient between any two series of numbers \( x = \{x_1, x_2, \ldots, x_n\} \) and \( y = \{y_1, y_2, \ldots, y_n\} \) is defined as \[ r = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{\sigma_x} \right) \left( \frac{y_i - \bar{y}}{\sigma_y} \right), \] where \( \bar{x} \) is the average of values in \( x \), and \( \sigma_x \) is the standard deviation of these values.

There are many ways of conceptualizing the correlation coefficient. If you were to make a scatterplot of the values of \( x \) against \( y \) (pairing \( x_1 \) with \( y_1 \), \( x_2 \) with \( y_2 \), etc), then \( r \) reports how well you can fit a line to the values. The simplest way to think about the correlation coefficient is to plot \( x \) and \( y \) as curves, with \( r \) telling you how similar the shapes of the two curves are. The Pearson correlation coefficient is always between -1 and 1, with 1 meaning that the two series are identical, 0 meaning they are completely uncorrelated, and -1 meaning they are perfect opposites. The correlation coefficient is invariant under linear transformation of the data. That is, if you multiply all the values in \( y \) by 2, or add 7 to all the values in \( y \), the correlation between \( x \) and \( y \) will be unchanged. Thus, two curves that have identical shape, but different magnitude, will still have a correlation of 1.
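The centered and uncentered correlations can be computed directly from the formulas above, using the $1/n$ convention of this manual. The following Python sketch is our own illustration, not Cluster's actual C code:

```python
import math

# Sketch of the centered and uncentered Pearson correlation as defined
# above (1/n convention). Illustrative; not Cluster's actual C code.

def pearson(x, y, centered=True):
    n = len(x)
    mx = sum(x) / n if centered else 0.0   # uncentered: assume mean is 0
    my = sum(y) / n if centered else 0.0
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n * sx * sy)

x = [0.0, 1.0, 2.0]
y = [7.0, 9.0, 11.0]          # same shape as x, but offset and scaled
print(round(pearson(x, y), 6))                # 1.0
print(pearson(x, y, centered=False) < 1.0)    # True: the offset breaks it
```

The example shows the invariance under linear transformation discussed above: the centered correlation of the shifted, scaled curve is still 1, while the uncentered variant is sensitive to the offset.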
Cluster actually uses four different flavors of the Pearson correlation. The textbook Pearson correlation coefficient, given by the formula above, is used if you select Correlation (centered) in the Similarity Metric dialog box. Correlation (uncentered) uses the following modified equations: \[ r = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{x_i}{\sigma_x^{(0)}} \right) \left( \frac{y_i}{\sigma_y^{(0)}} \right), \] in which \[ \sigma_x^{(0)} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i)^2}; \quad \sigma_y^{(0)} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i)^2}. \] This is basically the same function, except that it assumes the mean is 0, even when it is not. The difference is that, if you have two vectors \( x \) and \( y \) with identical shape, but which are offset relative to each other by a fixed value, they will have a standard (centered) Pearson correlation of 1 but will not have an uncentered correlation of 1. The uncentered correlation is equal to the cosine of the angle between two \( n \)-dimensional vectors \( x \) and \( y \), each representing a vector in \( n \)-dimensional space that passes through the origin. Cluster provides two similarity metrics that are the absolute value of these two correlation functions, which consider two items to be similar if they have opposite expression patterns; the standard correlation coefficients consider opposite genes to be very distant.

### 3.2 Non-parametric distance measures

The Spearman rank correlation and Kendall’s \( \tau \) are two additional metrics, which are non-parametric versions of the Pearson correlation coefficient. These methods are more robust against outliers. The Spearman rank correlation calculates the correlation between the ranks of the data values in the two vectors. For example, if we have two data vectors \[ x = \{2.3, 6.7, 4.5, 20.8\}; \\ y = \{2.1, 5.9, 4.4, 4.2\}, \] then we first replace them by their ranks: \[ x = \{1, 3, 2, 4\}; \\ y = \{1, 4, 3, 2\}.
\] Now we calculate the correlation coefficient in the usual manner from these data vectors, resulting in \[ r_{\text{Spearman}} = 0.4. \] In comparison, the regular Pearson correlation between these data is \( r = 0.2344 \). By replacing the data values by their ranks, we reduced the effect of the outlier 20.8 on the value of the correlation coefficient. The Spearman rank correlation can be used as a test statistic for independence between \( x \) and \( y \). For more information, see Conover (1980). Kendall’s \( \tau \) goes a step further by using only the relative ordering of \( x \) and \( y \) to calculate the correlation (Snedecor & Cochran). To calculate Kendall’s \( \tau \), consider all pairs of data points \((x_i, y_i)\) and \((x_j, y_j)\). We call a pair concordant if - \( x_i < x_j \) and \( y_i < y_j \); or - \( x_i > x_j \) and \( y_i > y_j \), and discordant if - \( x_i < x_j \) and \( y_i > y_j \); or - \( x_i > x_j \) and \( y_i < y_j \). We can represent this by a table: <table> <thead> <tr> <th></th> <th>(2.3, 2.1)</th> <th>(6.7, 5.9)</th> <th>(4.5, 4.4)</th> <th>(20.8, 4.2)</th> </tr> </thead> <tbody> <tr> <td>(2.3, 2.1)</td> <td>--</td> <td>&lt;&lt;</td> <td>&lt;&lt;</td> <td>&lt;&lt;</td> </tr> <tr> <td>(6.7, 5.9)</td> <td>&gt;&gt;</td> <td>--</td> <td>&gt;&gt;</td> <td>&lt;&gt;</td> </tr> <tr> <td>(4.5, 4.4)</td> <td>&gt;&gt;</td> <td>&lt;&lt;</td> <td>--</td> <td>&lt;&gt;</td> </tr> <tr> <td>(20.8, 4.2)</td> <td>&gt;&gt;</td> <td>&gt;&gt;</td> <td>&gt;&gt;</td> <td>--</td> </tr> </tbody> </table> From this table, we find that there are four concordant pairs and two discordant pairs: \[ n_c = 4; \] \[ n_d = 2; \] Kendall’s \( \tau \) is calculated as \[ \tau = \frac{n_c - n_d}{n(n - 1)/2}, \] which in this case evaluates to 0.33. In the C Clustering Library, the calculation of Kendall’s \( \tau \) is corrected for the possibility that two ranks are equal.
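The worked example above can be reproduced in a few lines of Python; this is an expository sketch without the tie correction used by the C Clustering Library:

```python
def ranks(values):
    # Replace each value by its rank (1 = smallest); no tie handling.
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def kendall_tau(x, y):
    # Count concordant and discordant pairs, as in the table above.
    n = len(x)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                nc += 1
            elif s < 0:
                nd += 1
    return (nc - nd) / (n * (n - 1) / 2)

x = [2.3, 6.7, 4.5, 20.8]
y = [2.1, 5.9, 4.4, 4.2]
print(ranks(x), ranks(y))            # [1, 3, 2, 4] [1, 4, 3, 2]
print(pearson(ranks(x), ranks(y)))   # Spearman: 0.4
print(kendall_tau(x, y))             # (4 - 2) / 6, i.e. about 0.33
```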
As in the case of the Spearman rank correlation, we may use Kendall’s \( \tau \) to test for independence between \( x \) and \( y \). ### 3.3 Distance measures related to the Euclidean distance #### 3.3.1 Euclidean distance A newly added distance function is the Euclidean distance, which is defined as \[ d(x, y) = \frac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2. \] The Euclidean distance takes the difference between two gene expression levels directly. It should therefore only be used for expression data that are suitably normalized, for example by converting the measured gene expression levels to log-ratios. In the sum, we only include terms for which both \( x_i \) and \( y_i \) are present, and divide by \( n \) accordingly. Unlike the correlation-based distance measures, the Euclidean distance takes the magnitude of changes in the gene expression levels into account. An example of the Euclidean distance applied to \( k \)-means clustering can be found in De Hoon, Imoto, and Miyano (2002). #### 3.3.2 City-block distance The city-block distance, alternatively known as the Manhattan distance, is related to the Euclidean distance. Whereas the Euclidean distance corresponds to the length of the shortest path between two points, the city-block distance is the sum of distances along each dimension: \[ d = \frac{1}{n} \sum_{i=1}^{n} |x_i - y_i|. \] Up to the factor \( 1/n \), this is the distance you would have to walk between two points in a city, where you have to walk along city blocks. The city-block distance is a metric, as it satisfies the triangle inequality. Again we only include terms for which both \( x_i \) and \( y_i \) are present, and divide by \( n \) accordingly. As for the Euclidean distance, the expression data are subtracted directly from each other, and we should therefore make sure that they are properly normalized. ### 3.4 Missing values When either \( x \) or \( y \) has missing values, only observations present for both \( x \) and \( y \) are used in computing similarities.
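The two measures, together with the missing-value convention of section 3.4, can be sketched in a few lines of Python; here `None` marks a missing observation, which is a representation chosen for this sketch, not Cluster's file format:

```python
def euclidean(x, y):
    # Only observations present in both vectors enter the sum; divide by
    # that count, matching the 1/n convention of the text.
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

def cityblock(x, y):
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

x = [1.0, None, 3.0, 5.0]
y = [2.0, 0.5, None, 1.0]
print(euclidean(x, y))   # ((1-2)**2 + (5-1)**2) / 2 = 8.5
print(cityblock(x, y))   # (|1-2| + |5-1|) / 2 = 2.5
```

Only the first and last observations are present in both vectors, so only those two terms enter each sum.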
3.5 Calculating the distance matrix With any specified metric, the first step in the hierarchical clustering routines described below is to compute the distance (the opposite of similarity; for all correlation metrics distance = 1.0 - correlation) between all pairs of items to be clustered (e.g. the set of genes in the current dataset). This can often be time consuming, and, except for pairwise single-linkage clustering, memory intensive (the maximum amount of memory required is $4 \times N \times N$ bytes, where $N$ is the number of items being clustered). The algorithm for pairwise single-linkage hierarchical clustering is less memory-intensive (linear in $N$). 4 Clustering techniques The Cluster program provides several clustering algorithms. Hierarchical clustering methods organize genes in a tree structure, based on their similarity. Four variants of hierarchical clustering are available in Cluster. In $k$-means clustering, genes are organized into $k$ clusters, where the number of clusters $k$ needs to be chosen in advance. Self-Organizing Maps create clusters of genes on a two-dimensional rectangular grid, where neighboring clusters are similar. For each of these methods, one of the eight different distance measures can be used. Finally, Principal Component Analysis represents the data along principal component axes rather than assigning items to clusters. 4.1 Hierarchical Clustering The Hierarchical Clustering tab allows you to perform hierarchical clustering on your data. This is a powerful and useful method for analyzing all sorts of large genomic datasets. Many published applications of this analysis are given in the references section at the end. Cluster currently performs four types of binary, agglomerative, hierarchical clustering.
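All four types share the same agglomerative loop, described in detail in the next paragraphs; as a rough illustration, here is a naive O(N³) sketch over a precomputed distance matrix (for exposition only — Cluster's actual implementations are far more efficient):

```python
def agglomerate(dist, linkage=min):
    """Naive agglomerative clustering over a precomputed distance matrix.

    `linkage` combines the pairwise distances between the true items inside
    two (pseudo-)items: min gives single linkage, max complete linkage, and
    a mean gives average linkage. Returns the join events as
    (members_a, members_b, distance)."""
    active = [frozenset([i]) for i in range(len(dist))]
    joins = []
    while len(active) > 1:
        best = None
        # Find the two closest remaining (pseudo-)items.
        for i in range(len(active)):
            for j in range(i + 1, len(active)):
                d = linkage([dist[a][b] for a in active[i] for b in active[j]])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = active[i] | active[j]
        joins.append((active[i], active[j], d))
        # Replace the two joined items by the new pseudo-item.
        active = [c for k, c in enumerate(active) if k not in (i, j)] + [merged]
    return joins

# Three items; items 0 and 1 are closest, so they are joined first.
dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
print(agglomerate(dist))        # single linkage: joins at distances 1, then 2
print(agglomerate(dist, max))   # complete linkage: joins at distances 1, then 4
```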
The basic idea is to assemble a set of items (genes or arrays) into a tree, where items are joined by very short branches if they are very similar to each other, and by increasingly longer branches as their similarity decreases. The first step in hierarchical clustering is to calculate the distance matrix between the gene expression data. Once this matrix of distances is computed, the clustering begins. Agglomerative hierarchical processing consists of repeated cycles where the two closest remaining items (those with the smallest distance) are joined by a node/branch of a tree, with the length of the branch set to the distance between the joined items. The two joined items are removed from the list of items being processed and replaced by an item that represents the new branch. The distances between this new item and all other remaining items are computed, and the process is repeated until only one item remains. Note that once clustering commences, we are working with true items (e.g. a single gene) and pseudo-items that contain a number of true items. There are a variety of ways to compute distances when we are dealing with pseudo-items, and Cluster currently provides four choices, which are called centroid linkage, single linkage, complete linkage, and average linkage. Note that in older versions of Cluster, centroid linkage was referred to as average linkage. 4.1.1 Centroid Linkage Clustering If you click Centroid Linkage Clustering, a vector is assigned to each pseudo-item, and this vector is used to compute the distances between this pseudo-item and all remaining items or pseudo-items using the same similarity metric as was used to calculate the initial similarity matrix. The vector is the average of the vectors of all actual items (e.g. genes) contained within the pseudo-item.
Thus, when a new branch of the tree is formed joining together a branch with 5 items and an actual item, the new pseudo-item is assigned a vector that is the average of the 6 vectors it contains, and not the average of the two joined items (note that missing values are not used in the average, and a pseudo-item can have a missing value if all of the items it contains are missing values in the corresponding row/column). Note that from a theoretical perspective, Centroid Linkage Clustering is peculiar if it is used in combination with one of the distance measures that are based on the Pearson correlation. For these distance measures, the data vectors are implicitly normalized when calculating the distance (for example, by subtracting the mean and dividing by the standard deviation when calculating the Pearson correlation). However, when two genes are joined and their centroid is calculated by averaging their data vectors, no normalization is applied. This may lead to the surprising result that distances may decrease when we go up in the tree representing the hierarchical clustering result. For example, consider this data set: <table> <thead> <tr> <th></th> <th>Exp1</th> <th>Exp2</th> <th>Exp3</th> <th>Exp4</th> </tr> </thead> <tbody> <tr> <td>Gene1</td> <td>0.96</td> <td>0.07</td> <td>0.97</td> <td>0.98</td> </tr> <tr> <td>Gene2</td> <td>0.50</td> <td>0.28</td> <td>0.29</td> <td>0.77</td> </tr> <tr> <td>Gene3</td> <td>0.08</td> <td>0.96</td> <td>0.51</td> <td>0.51</td> </tr> <tr> <td>Gene4</td> <td>0.14</td> <td>0.19</td> <td>0.41</td> <td>0.51</td> </tr> </tbody> </table> Performing pairwise centroid-linkage hierarchical clustering on this data set, using the Pearson distance as the distance measure, produces the clustering result: - Gene 1 joins Gene 2 at distance 0.47 - (Gene 1, Gene 2) joins Gene 4 at distance 0.46 - (Gene 1, Gene 2, Gene 4) joins Gene 3 at distance 1.62 This may result in ill-formed dendrograms. For an example, see the Java TreeView manual.
A solution is to use the Euclidean or the city-block distance, or to use one of the other hierarchical clustering routines, which don’t suffer from this issue regardless of the distance measure being used. 4.1.2 Single Linkage Clustering In Single Linkage Clustering the distance between two items \(x\) and \(y\) is the minimum of all pairwise distances between items contained in \(x\) and \(y\). Unlike centroid linkage clustering, in single linkage clustering no further distances need to be calculated once the distance matrix is known. In Cluster 3.0, as of version 1.29, the implementation of single linkage clustering is based on the SLINK algorithm (see Sibson, 1973). Whereas this algorithm yields the exact same clustering result as conventional single-linkage hierarchical clustering, it is much faster and more memory-efficient (being linear in the memory requirements, compared to quadratic for the conventional algorithm). Hence, single-linkage hierarchical clustering can be used to cluster large gene expression data sets, for which centroid-, complete-, and average-linkage fail due to lack of memory. 4.1.3 Complete Linkage Clustering In Complete Linkage Clustering the distance between two items \(x\) and \(y\) is the maximum of all pairwise distances between items contained in \(x\) and \(y\). As in single linkage clustering, no other distances need to be calculated once the distance matrix is known. 4.1.4 Average Linkage Clustering In average linkage clustering, the distance between two items \(x\) and \(y\) is the mean of all pairwise distances between items contained in \(x\) and \(y\). 4.1.5 Weighting By default, all of the observations for a given item are treated equally. In some cases you may want to give some observations more weight than others. For example, if you have duplicate copies of a gene on your array, you might want to downweight each individual copy when computing distances between arrays.
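The effect of such downweighting on a correlation can be sketched as follows; note the assumption that the means and standard deviations are weighted too, which the weighted-correlation formula given in this section leaves implicit:

```python
import math

def weighted_pearson(x, y, w):
    # Pearson correlation with observation weights; the means and standard
    # deviations are weighted as well (an assumption of this sketch).
    W = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / W
    my = sum(wi * yi for wi, yi in zip(w, y)) / W
    sx = math.sqrt(sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / W)
    sy = math.sqrt(sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / W)
    return sum(wi * (xi - mx) * (yi - my)
               for wi, xi, yi in zip(w, x, y)) / (W * sx * sy)

x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 5.0]
# A weight of 2 behaves exactly like duplicating that observation:
r_weighted = weighted_pearson(x, y, [2.0, 1.0, 1.0])
r_duplicated = weighted_pearson([1.0] + x, [2.0] + y, [1.0] * 4)
print(abs(r_weighted - r_duplicated) < 1e-12)   # True
```

With all weights set to 1.0 (the default in the input file), this reduces to the ordinary Pearson correlation.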
You can specify weights using the ‘GWEIGHT’ (gene weight) and ‘EWEIGHT’ (experiment weight) parameters in the input file. By default all weights are set to 1.0. Thus, the actual formula, with weights included, for the Pearson correlation of \(x = \{x_1, x_2, \ldots, x_n\}\) and \(y = \{y_1, y_2, \ldots, y_n\}\) with observation weights of \(w = \{w_1, w_2, \ldots, w_n\}\) is \[ r = \frac{1}{\sum_{i=1}^{n} w_i} \sum_{i=1}^{n} w_i \left( \frac{x_i - \bar{x}}{\sigma_x} \right) \left( \frac{y_i - \bar{y}}{\sigma_y} \right) \] Note that when you are clustering rows (genes), you are using column (array) weights. It is possible to compute weights as well based on a not entirely well understood function. If you want to compute weights for clustering genes, select the check box in the Genes panel. This will expose a Weight Options dialog box in the Arrays panel (I realize this placement is a bit counterintuitive, but it makes sense as you will see below). The idea behind the Calculate Weights option is to weight each row (the same idea applies to columns as well) based on the local density of row vectors in its vicinity, with a high density vicinity resulting in a low weight and a low density vicinity resulting in a higher weight. This is implemented by assigning a local density score \[ L(i) = \sum_{j \text{ with } d(i,j) < k} \left( \frac{k - d(i,j)}{k} \right)^n , \] where the cutoff \( k \) and the exponent \( n \) are user supplied parameters. The weight for each row is \( 1/L \). Note that \( L(i) \) is always at least 1, since \( d(i,i) = 0 \). Each other row that is within the distance \( k \) of row \( i \) increases \( L(i) \) and decreases the weight. The larger \( d(i,j) \), the less \( L(i) \) is increased. Values of \( n \) greater than 1 mean that the contribution to \( L(i) \) drops off rapidly as \( d(i,j) \) increases. ### 4.1.6 Ordering of Output File The result of a clustering run is a tree or pair of trees (one for genes, one for arrays).
However, to visualize the results in TreeView, it is necessary to use this tree to reorder the rows and/or columns in the initial datatable. Note that if you simply draw all of the nodes in the tree, with joined items placed next to each other, a natural ordering of items emerges. Thus, any tree can be used to generate an ordering. However, the ordering for any given tree is not unique. There is a family of $2^{N-1}$ orderings consistent with any tree of $N$ items; you can flip any node on the tree (exchange the bottom and top branches) and you will get a new ordering that is equally consistent with the tree. By default, when Cluster joins two items, it randomly places one item on the top branch and the other on the bottom branch. It is possible to guide this process to generate the best ordering consistent with a given tree. This is done by using the ‘GORDER’ (gene order) and ‘EORDER’ (experiment order) parameters in the input file, or by running a self-organizing map (see section below) prior to clustering. By default, Cluster sets the order parameter for each row/column to 1. When a new node is created, Cluster compares the order parameters of the two joined items, and places the item with the smaller order value on the top branch. The order parameter for a node is the average of the order parameters of its members. Thus, if you want the gene order produced by Cluster to be as close as possible (without violating the structure of the tree) to the order in your input file, you use the ‘GORDER’ column, and assign a value that increases for each row. Note that order parameters do not have to be unique. 4.1.7 Output Files Cluster writes up to three output files for each hierarchical clustering run. The root filename of each file is whatever text you enter into the Job Name dialog box. When you load a file, Job Name is set to the root filename of the input file. The three output files are JobName.cdt, JobName.gtr, JobName.atr.
The .cdt (for clustered data table) file contains the original data with the rows and columns reordered based on the clustering result. It is the same format as the input files, except that an additional column and/or row is added if clustering is performed on genes and/or arrays. This additional column/row contains a unique identifier for each row/column that is linked to the description of the tree structure in the .gtr and .atr files. The .gtr (gene tree) and .atr (array tree) files are tab-delimited text files that report on the history of node joining in the gene or array clustering (note that these files are produced only when clustering is performed on the corresponding axis). When clustering begins each item to be clustered is assigned a unique identifier (e.g. ‘GENE1X’ or ‘ARRY42X’ — the ‘X’ is a relic from the days when this was written in Perl and substring searches were used). These identifiers are added to the .cdt file. As each node is generated, it receives a unique identifier as well, starting with ‘NODE1X’, ‘NODE2X’, etc. Each joining event is stored in the .gtr or .atr file as a row with the node identifier, the identifiers of the two joined elements, and the similarity score for the two joined elements. These files look like: ``` NODE1X GENE1X GENE4X 0.98 NODE2X GENE5X GENE2X 0.80 NODE3X NODE1X GENE3X 0.72 NODE4X NODE2X NODE3X 0.60 ``` The .gtr and/or .atr files are automatically read in TreeView when you open the corresponding .cdt file. 4.2 The $k$-means Clustering Algorithm The $k$-means clustering algorithm is a simple, but popular, form of cluster analysis. The basic idea is that you start with a collection of items (e.g. genes) and some chosen number of clusters ($k$) you want to find. The items are initially randomly assigned to a cluster. The $k$-means clustering proceeds by repeated application of a two-step process where 1. the mean vector for all items in each cluster is computed; 2.
items are reassigned to the cluster whose center is closest to the item. Since the initial cluster assignment is random, different runs of the $k$-means clustering algorithm may not give the same final clustering solution. To deal with this, the $k$-means clustering algorithm is repeated many times, each time starting from a different initial clustering. The sum of distances within the clusters is used to compare different clustering solutions. The clustering solution with the smallest sum of within-cluster distances is saved. The number of runs that should be done depends on how difficult it is to find the optimal solution, which in turn depends on the number of genes involved. Cluster therefore shows in the status bar how many times the optimal solution has been found. If this number is one, there may be a clustering solution with an even lower sum of within-cluster distances. The $k$-means clustering algorithm should then be repeated with more trials. If the optimal solution is found many times, the solution that has been found is likely to have the lowest possible within-cluster sum of distances. We can then assume that the $k$-means clustering procedure has found the overall optimal clustering solution. It should be noted that generally, the $k$-means clustering algorithm finds a clustering solution with a smaller within-cluster sum of distances than the hierarchical clustering techniques. The parameters that control $k$-means clustering are - the number of clusters ($k$); - the number of trials. The output is simply an assignment of items to a cluster. The implementation here simply rearranges the rows and/or columns based on which cluster they were assigned to. The output data file is $\text{JobName}_K\_GKg\_AKa.cdt$, where _GKg is included if genes were organized, and _AKa is included if arrays were organized. Here, Kg and Ka represent the number of clusters for gene clustering and array clustering, respectively.
This file contains the gene expression data, organized by cluster by rearranging the rows and columns. In addition, the files $\text{JobName}_K\_GKg.kgg$ and $\text{JobName}_K\_AKa.kag$ are created, containing a list of genes/arrays and the cluster they were assigned to. Whereas $k$-means clustering as implemented in Cluster 3.0 allows any of the eight distance measures to be used, we recommend using the Euclidean distance or city-block distance instead of the distance measures based on the Pearson correlation, for the same reason as in case of pairwise centroid-linkage hierarchical clustering. The distance measures based on the Pearson correlation effectively normalize the data vectors when calculating the distance, whereas no normalization is used when calculating the cluster centroid. To use $k$-means clustering with a distance measure based on the Pearson correlation, it is better to first normalize the data appropriately (using the "Adjust Data" tab) before running the $k$-means algorithm. Cluster also implements a slight variation on $k$-means clustering, known as $k$-medians clustering, in which the median instead of the mean of items in a node is used. In a theoretical sense, it is best to use $k$-means with the Euclidean distance, and $k$-medians with the city-block distance. 4.3 Self-Organizing Maps Self-Organizing Maps (SOMs) are a method of cluster analysis somewhat related to \textit{k}-means clustering. SOMs were invented by Teuvo Kohonen in the early 1980s, and have recently been used in genomic analysis (see Chu 1998, Tamayo 1999 and Golub 1999 in references). The Tamayo paper contains a simple explanation of the methods. A more detailed description is available in the book by Kohonen, Self-Organizing Maps, 1997. The current implementation varies slightly from that of Tamayo et al., in that it restricts the analysis to one-dimensional SOMs along each axis, as opposed to a two-dimensional network.
The one-dimensional SOM is used to reorder the elements on whichever axes are selected. The result is similar to the result of \textit{k}-means clustering, except that, unlike in \textit{k}-means clustering, the nodes in a SOM are ordered. This tends to result in a relatively smooth transition between groups. The options for SOMs are - whether or not you will organize each axis; - the number of nodes for each axis (the default is \(n^{1/4}\), where \(n\) is the number of items; the total number of clusters is then equal to the square root of the number of items); - the number of iterations to be run. The output file is of the form \textit{JobName\_SOM\_GXg-Yg\_AXa-Ya.txt}, where \(GXg-Yg\) is included if genes were organized, and \(AXa-Ya\) is included if arrays were organized. \(X\) and \(Y\) represent the dimensions of the corresponding SOM. Up to two additional files (.\textit{gnf} and .\textit{anf}) are written containing the vectors for the SOM nodes. In previous versions of Cluster, only one-dimensional SOMs were supported. The current version of Cluster introduces two-dimensional SOMs. SOMs and hierarchical clustering: Our original use of SOMs (see Chu et al., 1998) was motivated by the desire to take advantage of the properties of both SOMs and hierarchical clustering. This was accomplished by first computing a one-dimensional SOM, and using the ordering from the SOM to guide the flipping of nodes in the hierarchical tree. In Cluster, after a SOM is run on a dataset, the GORDER and/or EORDER fields are set to the ordering from the SOM so that, for subsequent hierarchical clustering runs, the output ordering will come as close as possible to the ordering in the SOM without violating the structure of the tree. 4.4 Principal Component Analysis Principal Component Analysis (PCA) is a widely used technique for analyzing multivariate data.
A practical example of applying Principal Component Analysis to gene expression data is presented by Yeung and Ruzzo (2001). In essence, PCA is a coordinate transformation in which each row in the data matrix is written as a linear sum over basis vectors called principal components, which are ordered and chosen such that each maximally explains the remaining variance in the data vectors. For example, an $n \times 3$ data matrix can be represented as an ellipsoidal cloud of $n$ points in three-dimensional space. The first principal component is the longest axis of the ellipsoid, the second principal component is the second longest axis of the ellipsoid, and the third principal component is the shortest axis. Each row in the data matrix can be reconstructed as a suitable linear combination of the principal components. The principal components can be found by calculating the eigenvectors of the covariance matrix of the data. The corresponding eigenvalues determine how much of the variance present in the data is explained by each principal component. Before applying PCA, typically the mean is subtracted from each column in the data matrix. In the example above, this effectively centers the ellipsoidal cloud around its centroid in 3D space, with the principal components describing the variation of points in the ellipsoidal cloud with respect to their centroid. In Cluster, you can apply PCA to the rows (genes) of the data matrix, or to the columns (microarrays) of the data matrix. In each case, the output consists of two files. When applying PCA to genes, the names of the output files are \textit{JobName\_pca\_gene.pc.txt} and \textit{JobName\_pca\_gene.coords.txt}, where the former contains the principal components, and the latter contains the coordinates of each row in the data matrix with respect to the principal components. When applying PCA to the columns in the data matrix, the respective file names are \textit{JobName\_pca\_array.pc.txt} and \textit{JobName\_pca\_array.coords.txt}.
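Because the coordinates and principal components together preserve all information in the data, multiplying the coordinates by the components and adding back the column means recovers the data matrix. The sketch below verifies this numerically for the worked example of this section (values rounded to six decimals, hence the tolerance):

```python
def matmul(A, B):
    # Plain matrix product of nested lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Coordinates, principal components, and column means from the worked
# example in this section (rounded to six decimals).
coords = [[ 6.280326, -2.404095, -0.760157],
          [ 4.720801, -4.995230,  0.601424],
          [-8.755665, -2.117608,  0.924161],
          [ 3.443490,  8.133673,  0.621082],
          [-5.688953,  1.383261, -1.386509]]
pcs = [[ 0.045493,  0.753594, -0.655764],
       [-0.756275,  0.454867,  0.470260],
       [-0.652670, -0.474545, -0.590617]]
mean = [0.4, 0.0, 2.8]

recovered = [[v + m for v, m in zip(row, mean)] for row in matmul(coords, pcs)]
expected = [[3, 4, -2], [4, 1, -3], [1, -8, 7], [-6, 6, 4], [0, -3, 8]]
print(all(abs(a - b) < 1e-3 for r, e in zip(recovered, expected)
          for a, b in zip(r, e)))   # True
```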
The original data matrix can be recovered from the principal components and the coordinates. As an example, consider this input file:
\begin{verbatim}
UNIQID  EXP1  EXP2  EXP3
GENE1   3     4     -2
GENE2   4     1     -3
GENE3   1     -8    7
GENE4   -6    6     4
GENE5   0     -3    8
\end{verbatim}
Applying PCA to the rows (genes) of the data in this input file generates a coordinate file containing:
\begin{verbatim}
UNIQID  NAME   GWEIGHT   13.513398  10.162987  2.025283
GENE1   GENE1  1.000000  6.280326   -2.404095  -0.760157
GENE2   GENE2  1.000000  4.720801   -4.995230  0.601424
GENE3   GENE3  1.000000  -8.755665  -2.117608  0.924161
GENE4   GENE4  1.000000  3.443490   8.133673   0.621082
GENE5   GENE5  1.000000  -5.688953  1.383261   -1.386509
\end{verbatim}
where the first line shows the eigenvalues of the principal components, and a principal component file containing:
\begin{verbatim}
EIGVALUE   EXP1       EXP2       EXP3
MEAN       0.400000   0.000000   2.800000
13.513398  0.045493   0.753594   -0.655764
10.162987  -0.756275  0.454867   0.470260
2.025283   -0.652670  -0.474545  -0.590617
\end{verbatim}
with the eigenvalues of the principal components shown in the first column. From this principal component decomposition, we can regenerate the original data matrix by multiplying the coordinates by the principal components and adding back the column means: \[ \begin{pmatrix} 6.280326 & -2.404095 & -0.760157 \\ 4.720801 & -4.995230 & 0.601424 \\ -8.755665 & -2.117608 & 0.924161 \\ 3.443490 & 8.133673 & 0.621082 \\ -5.688953 & 1.383261 & -1.386509 \end{pmatrix} \cdot \begin{pmatrix} 0.045493 & 0.753594 & -0.655764 \\ -0.756275 & 0.454867 & 0.470260 \\ -0.652670 & -0.474545 & -0.590617 \end{pmatrix} + \begin{pmatrix} 0.4 & 0.0 & 2.8 \\ 0.4 & 0.0 & 2.8 \\ 0.4 & 0.0 & 2.8 \\ 0.4 & 0.0 & 2.8 \\ 0.4 & 0.0 & 2.8 \end{pmatrix} = \begin{pmatrix} 3 & 4 & -2 \\ 4 & 1 & -3 \\ 1 & -8 & 7 \\ -6 & 6 & 4 \\ 0 & -3 & 8 \end{pmatrix} \] Note that the coordinate file `JobName_pca_gene.coords.txt` is a valid input file to Cluster 3.0. Hence, it can be loaded into Cluster 3.0 for further analysis, possibly after removing columns with low eigenvalues. 5 Running Cluster 3.0 as a command line program Cluster 3.0 can also be run as a command line program.
This may be useful if you want to run Cluster 3.0 on a remote server, and also allows automatic processing of a large number of data files by running a batch script. Note, however, that the Python and Perl interfaces to the C Clustering Library may be better suited for this task, as they are more powerful than the command line program (see the manual for the C Clustering Library at http://bonsai.hgc.jp/~mdehoon/software/cluster/cluster.pdf). The GUI version of Cluster 3.0 can be used as a command line program by applying the appropriate command line parameters. You can also compile Cluster 3.0 without GUI support (if you will be using it from the command line only) by downloading the source code from http://bonsai.hgc.jp/~mdehoon/software/cluster, and running
```
configure --without-x
make
make install
```
The executable is called cluster. To run this program, execute
```
cluster [options]
```
in which the options consist of the following command line parameters:
- -f filename: File loading
- -l: Specifies to log-transform the data before clustering (default is no log-transform)
- -cg a|m: Specifies whether to center each row (gene) in the data set; a: subtract the mean of each row; m: subtract the median of each row (default is no centering)
- -ng: Specifies to normalize each row (gene) in the data set (default is no normalization)
- -ca a|m: Specifies whether to center each column (microarray) in the data set; a: subtract the mean of each column; m: subtract the median of each column (default is no centering)
- -na: Specifies to normalize each column (microarray) in the data set (default is no normalization)
- -u jobname: Allows you to specify a different name for the output files (default is derived from the input file name)
- -g [0..9]: Specifies the distance measure for gene clustering; 0 means no gene clustering; for the values 1 through 9, see below (default: 0)
- -e [0..9]: Specifies the distance measure for microarray clustering.
0 means no microarray clustering; for the values 1 through 9, see below (default: 0)
- -m [msca]: Specifies which hierarchical clustering method to use; m: pairwise complete- (maximum-) linkage (default); s: pairwise single-linkage; c: pairwise centroid-linkage; a: pairwise average-linkage
- -k number: Specifies whether to run k-means clustering instead of hierarchical clustering, and the number of clusters k to use (default: 0, no k-means clustering)
- -pg: Specifies to apply Principal Component Analysis to genes instead of clustering
- -pa: Specifies to apply Principal Component Analysis to arrays instead of clustering
- -s: Specifies to calculate an SOM instead of hierarchical clustering
- -x number: Specifies the horizontal dimension of the SOM grid (default: 2)
- -y number: Specifies the vertical dimension of the SOM grid (default: 1)
- -v, --version: Display version information
- -h, --help: Display help information
For the command line options -g, -e, the following integers can be used to specify the distance measure:
- 0: No clustering
- 1: Uncentered correlation
- 2: Pearson correlation
- 3: Uncentered correlation, absolute value
- 4: Pearson correlation, absolute value
- 5: Spearman’s rank correlation
- 6: Kendall’s τ
- 7: Euclidean distance
- 8: City-block distance
By default, no clustering is done, allowing you to use cluster for normalizing a data set only. 6 TreeView TreeView is a program that allows interactive graphical analysis of the results from Cluster. TreeView reads in matching *.cdt and *.gtr, *.atr, *.kgg, or *.kag files produced by Cluster. We recommend using the Java program Java TreeView, which is based on the original TreeView. Java TreeView was written by Alok Saldanha at Stanford University; it can be downloaded from http://jtreeview.sourceforge.net/. Java TreeView runs on Windows, Macintosh, Linux, and Unix computers, and can show both hierarchical and $k$-means results.
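TreeView's tree display rests on the .gtr/.atr join records described in section 4.1.7, which are simple to parse; a minimal sketch (the helper function is hypothetical, not part of TreeView):

```python
def parse_tree_file(lines):
    """Parse .gtr/.atr join records: node id, two joined ids, similarity."""
    joins = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) == 4:
            joins.append((fields[0], fields[1], fields[2], float(fields[3])))
    return joins

# The example .gtr records from section 4.1.7:
records = [
    "NODE1X\tGENE1X\tGENE4X\t0.98",
    "NODE2X\tGENE5X\tGENE2X\t0.80",
    "NODE3X\tNODE1X\tGENE3X\t0.72",
    "NODE4X\tNODE2X\tNODE3X\t0.60",
]
joins = parse_tree_file(records)
print(len(joins))   # 4 join events, i.e. a tree over 5 leaves
```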
7 Code Development Information In previous versions of Cluster, the proprietary Numerical Recipes routines were heavily used. We have replaced these routines by the C clustering library, which was released under the Python License. Accordingly, the complete source code of Cluster is now open. It can be downloaded from http://bonsai.hgc.jp/~mdehoon/software/cluster. We used the GNU C compiler in order to enable anybody to compile the code. No commercial compiler is required. The GNU C compiler is available at http://www.gnu.org. There you can also find texinfo, which was used to generate the printed and the HTML documentation. To convert the picture files to EPS files for the printed documentation, we used pngtopnm and pnmtops of Netpbm, which can be found at http://netpbm.sourceforge.net. The HTML Help file was generated using the HTML Help Workshop, which is freely available at the Microsoft site (http://msdn.microsoft.com). The Windows Installer was created with the Inno Setup Compiler, which is available at http://www.innosetup.com. For Mac OS X, we used the Project Builder and the Interface Builder, which are part of the Mac OS X Development Tools. The prebuilt package was created with PackageMaker, which is also part of Mac OS X. The project files needed to recompile Cluster 3.0 are included in the source code. From the command prompt, Cluster 3.0 can be recompiled by running make from the mac subdirectory; this produces a universal binary for PowerPC and Intel processors. For Cluster 3.0 on Linux/Unix, we used the Motif libraries that are installed on most Linux/Unix computers. The include files are typically located in /usr/X11R6/include/Xm. You will need a version of Motif that is compliant with Motif 2.1, such as Open Motif (http://www.opengroup.org), which is available at http://www.motifzone.net. To improve the portability of the code, we made use of GNU’s automake and autoconf. 
The corresponding Makefile.am and configure.ac files are included in the source code distribution.

8 Bibliography
The Development Process

The Web technologies discussed in the previous chapter enable the development of Web applications ranging from small, ad hoc solutions to complex Web information systems. Before focusing on the actual development of such kinds of applications, i.e., the products, we would like to focus on the process that leads to the creation of a Web application. Understanding software development processes (in general, not only for Web applications), the main development activities that need to be performed, their interconnections, and their temporal order is of fundamental importance for the success of a software product.

In this book we follow a top-down approach to the organization of the contents: we first discuss the way (Web) application development is organized, and then turn to the single activities that we will identify in this chapter. Note that if we shift our point of view from that of the developer or project manager to that of the software product, what was before a software development process can now be seen as a software life cycle, that is, a model that describes the life of an application, from its inception to its dismissal. Software development process and software life cycle are thus synonyms used in the literature, depending on which view one prefers to highlight. As this is a book about Web engineering, we will mostly use the term software development process, though software life cycle may be used as well.

In this chapter, we first discuss software development and the processes that are generally executed for any software product, in order to introduce the reader to the basic concepts and activities. We then describe a possible development process more specific to Web applications and discuss its differences with respect to more traditional development processes. We also present some examples of concrete Web development processes, in order to familiarize the reader with the peculiarities of the Web.
The rest of the book will then be structured according to a typical process model.

3.1 Decomposing the Software Development Process

In today’s software industry it is hard to find products that are planned, implemented, and tested by a single developer, as the complexity of modern (Web) applications typically requires the involvement of several different experts who are able to address specific development requirements more precisely. Depending on the size of the application and the actors involved in the development process, building an application may be an intricate undertaking, exposed to a variety of risks that might compromise the success of the final application. In order to control the software development process, it is thus of fundamental importance to understand its constituent activities, its actors, and their interconnections.

3.1.1 Activities in Software Development

Software development is a creative process leading to an innovative software product or system. Usually, this process is not just one monolithic block of work that takes as input some ideas about the application to be developed and produces as output a perfectly fitting solution; the process can be decomposed into a set of basic activities with well-defined boundaries and meanings. Such activities aim at understanding the problem, planning a solution, carrying out the plan, examining the result for accuracy, and resolving possible errors or inaccuracies. Traditionally, the software development process is organized into the following basic activities:

- **Requirements engineering**: aims at understanding the problem.
- **Design**: aims at planning a solution to the problem.
- **Implementation**: translates the plan into running application code.
- **Testing and evaluation**: aims at identifying coding errors or inconsistencies between the collected requirements and their implementation.
- **Deployment**: brings the solution to the customers.
- **Maintenance**: aims at monitoring a running system and keeping it healthy and running. - **Evolution**: aims at improving the developed solution over time, providing new input to the development process in the form of new requirements. More precisely, requirements engineering aims at understanding a product’s needed capabilities and attributes. The analysis concentrates on functional requirements, referring to the functions that the system must be able to support, as well as on nonfunctional requirements, referring mainly to the quality of the offered solution. This implies identifying the general idea behind the system, as well as the stakeholders that require the new solution, the motivations for the production of a new system and the final usage environment. The collected requirements are elaborated with the aim of producing some high-level models of the system that abstract from irrelevant details of the problem domain. After a subset of the application’s requirements has been understood, the design can follow. The design activity aims at specifying a solution, which must meet functional and efficiency requirements, as well as possible constraints derived from the target environment. Requirements previously collected are therefore refined, restricted, and enhanced to satisfy possible technological constraints. There are different views characterizing software design. For example, Pressman [Pre05] describes software design as a system of activities for data/class design, component design, interface design, and architectural design. Considering different separate views helps us shape better the specific aspects of the system, such as structure, behavior, interoperability, data, and control flow. 
It also enforces separation of concerns, a basic software engineering principle stating that approaching a problem by separating the different involved concerns may help us cope with complexity and achieve some required engineering quality factors such as adaptability, maintainability, extendibility, and reusability.

During implementation, the different design views are transformed either manually or with the help of automatic generation tools into corresponding program code (structured into modules and/or files), database tables, and configuration files. Implementation may require the use of existing code libraries, a variety of different programming languages and communication protocols, and different hardware devices.

The testing and evaluation activity is typically conducted in parallel with the previous activities, because the correctness and reliability of intermediate results – not only of the final product – are of fundamental importance to guarantee the quality of an application. The most relevant quality concerns addressed by this activity are related to functionality (i.e., the correctness of the application behavior with respect to specified functional requirements), performance (i.e., the throughput and response times of the application in average and peak workload conditions), and usability (i.e., ease of use, communication effectiveness, and adherence to consolidated usage standards).

The deployment of a ready application delivers the developed application to its users. Depending on the nature of the application, this activity may imply the installation of the software on client PCs, the setup of central application and database servers, the configuration of communication middleware, and so on. Closely related to the deployment of a new software solution is the instruction and training of the future users of the application, especially in cases where the delivered solution represents a radical change rather than an incremental one.
Maintaining a deployed and running application means keeping the application in a healthy state, so as to guarantee high availability and to reduce failures. This may imply periodical checks of log files, bug reporting, and the cleaning up of temporary files, as well as the application of bug fixes or security patches, in order to keep the application always up to date.

Finally, *evolution* of an application aims at addressing new requirements that typically only emerge once the application has been used for a certain amount of time, and users start providing their feedback and comments. Evolving an existing application is more than bug or error fixing, and addressing the new requirements may require the whole development process to start anew in order to apply the required changes to the application. In addition – despite the rigorous application of software engineering techniques – oftentimes only after the deployment of the application does it become clear that certain requirements have not been met, and the application needs to be adjusted accordingly.

### 3.1.2 Actors in Software Development

As already hinted at in the introductory paragraph of this section, usually the above activities are not performed by one and the same person. Instead, the software engineering experience has led to the definition of a set of professional profiles, each of which is dedicated to specific problems or activities in the software development process:

- During requirements engineering, the *application analyst* collects the motivations that trigger the development of the application and turns them into a specification of the application requirements. In doing so, he interprets the long-term strategic business goals and constraints and transforms them into short-term, concrete application requirements.
- In application design, the *data architect* focuses on those application requirements that deal with content and domain data.
He produces a conceptual data model that organizes the data into a structure and a representation that can be accessed and used by the application.
- The *application architect* focuses on those application requirements that deal with the functions and services that are to be delivered. He develops a conceptual solution of the application logic (expressed by means of models, figures, or specification languages) that builds on top of the data model.
- Based on the specifications produced, the *programmer* or *developer* implements the solutions sketched by the data and application architects and tests and evaluates the implemented solutions. In most cases, the programmer also manages the deployment of the application.
- The application *administrator* is then the main actor in the deployment and evolution activities, being in charge of maintaining the application, providing for periodical backups, managing the community of users, and collecting feedback from the users.

Of course, the overall development process also involves the actual *users* of the application, especially in the evaluation of the usability of the application and its evolution over time. But users themselves are not actively involved in the production of the software artifact, which is the reason we do not list them as main actors in the development process.

3.2 Structuring the Software Development Process

The decomposition of the software development process into its basic activities and the identification of its main actors is a first step toward the successful management of the development process. Successful management, however, also demands some additional knowledge, i.e., the order of the activities and possible transition criteria [Boe88]. It is the structuring of the software development process into well-formalized process models, starting from the previously identified activities, which enables the easy definition of a suitable order and of intermediate results and milestones [GJM02].
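As a small illustration of what "order of activities plus transition criteria" can mean in practice, the sketch below is our own toy encoding (not taken from [Boe88] or [GJM02]): a process model is represented as the set of transitions it permits between the basic development activities.

```python
# Toy sketch (our own illustration, not from the cited literature):
# a process model as the set of transitions it permits between the
# basic development activities identified above.
ACTIVITIES = ["requirements", "design", "implementation",
              "testing", "deployment", "maintenance", "evolution"]

# A strictly sequential model permits only forward steps between
# neighboring activities.
sequential = {a: {b} for a, b in zip(ACTIVITIES, ACTIVITIES[1:])}

def allowed(model, src, dst):
    """True if the process model permits moving from src to dst."""
    return dst in model.get(src, set())

print(allowed(sequential, "design", "implementation"))  # True
print(allowed(sequential, "implementation", "design"))  # False: no feedback
```

Richer process models simply add further transitions to such a map, e.g., backward edges for feedback or loops for iteration.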
3.2.1 The Waterfall Model

One of the first explicit formalizations of a development process is the so-called Waterfall model. The Waterfall model suggests a sequential organization of the development activities: an activity may start only once its predecessor has been completed. The completion of an activity is typically associated with the delivery of a product, e.g., documentation or program code; therefore, the Waterfall model is oftentimes regarded as a document-driven process model [Boe88]. The Waterfall model was probably the first popular process model, and it is still widely adopted in many development situations today. Its main shortcoming is its inflexibility in adapting already completed activities in response to changing requirements or knowledge emerging in later stages of the development process. Also, bad design decisions that are taken early in the process are propagated unchanged to subsequent activities, i.e., in a strict Waterfall model it is difficult to undertake retroactive actions to fix errors made in already completed activities.

A variant of the Waterfall model, which has been introduced to address this shortcoming, is the Waterfall model with feedback. It keeps the sequential order of activities, but also allows backward communication (e.g., from the implementation activity to the design activity) in order to accommodate changes that impact previous activities.

3.2.2 The Spiral Model

As time passed, it became increasingly evident that the simple sequential order of the Waterfall model does not suffice to describe the real situation of many large software projects. Indeed, in most cases several of the constituent activities of the process model may need to be repeated two or more times, which is in clear contrast with the sequence imposed by the Waterfall model.
As an answer to the growing practice of iterating several times over the same activities, in 1988 Boehm [Boe88] proposed the so-called Spiral model, an incremental development process model that pays special attention to risk management. The Spiral model is graphically shown in Figure 3.1. The model explicitly suggests developing a software project in an incremental and iterative fashion by means of four convolutions, each one aimed at solving a specific development subproblem. Each convolution results in a prototype documenting the achievements of the respective convolution, accompanied by a risk analysis. The risk analysis considers various alternatives for achieving the project objectives, highlighting possible risks and their relevance, and suggesting solutions for preventing or eliminating such risks. The model is based on the idea that the incremental development of different versions of prototype applications implicitly reduces risk. The Spiral model may also be interpreted as a metamodel that is able to accommodate different development models, adding risk as a new dimension to the management problem [GJM02].

3.2.3 The Unified Model

Over time, the incremental or iterative practice of the Spiral model has inspired several other process models. Prominent examples are the *Unified Software Development Process* (Unified process [JBR99]) and its adaptation to the development of Web applications [Con99], as well as Catalysis [DW98].

**Fig. 3.2.** Phases, workflows, and iterations in the Unified Process Model [JBR99]

According to the Unified process [JBR99], a software product is built along several cycles; Figure 3.2 shows a high-level view of the process. Each of the cycles ends with a product release and is executed in four separate phases:

- **Inception:** in this phase, the general idea of the system along with a tentative architecture and a set of critical use cases is developed.
- **Elaboration:** in this phase, several architectural views of requirements and design models are created and the most critical use cases are realized. At the end of the phase the project manager should be able to justify the resources to be allocated for the software project and to claim that the project is feasible under the identified risks and the granted budget and human resources.
- **Construction:** in this phase, the complete product is developed, and all the requested use cases are realized. Minor changes to the architecture are allowed if developers uncover better solutions. At the end of the phase, the product is transferable to the users.
- **Transition:** in this phase, a small group of experienced users tests the product, suggests improvements, and reports defects and shortcomings. The phase also involves personnel training and support. Finally, the product is exposed to the full user community.

Each phase is further divided into several iterations. The phases are executed according to known workflows: requirements, analysis, design, implementation, and test. Each workflow produces several artifacts. The adopted analysis and modeling techniques are those suggested by the Unified Modeling Language (UML) [Gro00].

3.2.4 Other Models

Recent practices influenced by agile development approaches, like extreme programming [Bec00], do not prescribe which activities should be executed and in which order. Organizations and particular projects may adopt just some of the above-mentioned processes, activities, phases, or workflows according to the needs of the projects. However, in order to be able to learn from projects, the adopted processes should be well defined. Well-defined, controlled, measured, and repeatable processes help organizations to continuously evaluate and improve software project management. CMM and SPICE are reference models for organizing processes.
CMM defines five levels of organizations with respect to the maturity of their software processes: initial, repeatable, defined, managed, and optimized. The levels are characterized by the way in which software processes are defined, measured, controlled, and improved based on feedback. They are also characterized by the software development processes that are considered and standardized in the organization. These processes are based on the activities described above. In addition, some other product management activities, like planning, risk identification, configuration management, contract management, project tracking, and process management (including peer reviews, training, quality management, process change management, and defect prevention), may also be involved. For further details on these, the reader is referred to books such as [Hum89, IPW+95, FC99] and reports such as [PCCW93, EG96].

3.3 Web-Specific Software Development Processes

Web applications are a special instance of generic software applications, and, hence, Web engineering can be seen as a special instance of software engineering. Developing applications for the Web implies adhering to a few well-defined rules or conventions, which provide for a stable, robust, and scalable development and execution framework. Taking into account such Web-specific peculiarities allows a better tailoring of development process models. In the following, we introduce a characteristic process model, the so-called online evolution model, which stems from our experience in the development of Web applications and from the simple observation of the life cycle of modern Web applications that are available on the Web.

3.3.1 The Online Evolution Model

Figure 3.3 graphically shows the structure of the online evolution model. The model consists of five main activities, i.e., requirements analysis, design, implementation, testing and evaluation, and maintenance and evolution, and of seven transitions among the activities.
The coarse activities in the online evolution model very much resemble the traditional activities in the software development process. A main difference is the interpretation of the deployment as transition, and not as a first-class activity. In the domain of the Web, deploying an application to its users is indeed not a big deal, as the centralized architecture typical of Web applications, the absence of independent client-side application code, and the browser as execution environment greatly facilitate and speed up the deployment activity. As for the transitions, the model proposes an explicit connection from the maintenance and evolution activity to the requirements analysis activity. It is this transition that characterizes the model: Connecting the maintenance and evolution activity to the requirements analysis activity closes a second cycle in the model that involves the requirements analysis activity; we call this the evolution cycle. The first cycle is the one that spans the design, implementation, and testing and evaluation activities; we call this the build and test cycle. The two cycles correspond to two phases that are peculiar to modern Web applications: offline development and online evolution. Indeed, as highlighted in Figure 3.3, the build and test cycle refers to the incremental development of the application that will go online, while the evolution cycle refers to the incremental evolution which the application undergoes over time once it is online. In general, the two cycles are characterized by different cycling times: the former faster, the latter slower. The two iterative and incremental cycles appear particularly appropriate in the context of the Web, where applications must be deployed quickly (in “Internet time”), and requirements are likely to change during the development phase. 
As a matter of fact, it is increasingly common practice in Web development to substitute documentation artifacts (as heavily adopted in the Waterfall model) with real application prototypes, and to involve end users as early as possible for testing and evaluation. Also, while in traditional software engineering an application is released only once all the requirements have been met, for Web applications it is more and more common practice (and desirable) to publish online applications even though not all the requirements have been met yet. Early user feedback is becoming more important, and evolution is finally being seen as a real opportunity for improvement and less as an annoying adaptation of a functioning application. The evolution cycle is thus increasingly gaining importance.

As an example of this trend, consider the Google search engine’s Web site. Although users might not always be conscious of simple changes, Google is continuously evolving the features and interface of the search engine. According to the keynote speech given by Adam Bosworth (Vice President at Google) at the 2007 International Conference on Web Engineering, Google is constantly adding new features to its Web applications, measuring whether the expected improvements or user behaviors can be observed, and consolidating features that prove their viability. Think, for instance, of the Web site thumbnails accompanying the single search results, which were added some time ago to enrich the user’s browsing experience, but were then dropped because they provided little value to users who were not yet familiar with those sites. A successful evolution, instead, was the introduction of the suggestion to switch to Google Maps for user inquiries that contain location data.

The effect of the two development cycles in the online evolution model on the maturity of the actual Web application under development is schematically represented in Figure 3.4.
The incremental releases of the offline prototype in the build and test cycle occur rapidly, and the maturity of the application increases in big steps. After the deployment of the application, the incremental upgrades of the online application occur less frequently (evolution cycles), and they add less value to the application. However, they add value, continuously, and are an integral part of the application’s life cycle. The online evolution model discussed here is not intended to prescribe any rigid development process for Web applications. Rather, it describes the product life cycle of modern Web applications, as can be determined by observing the online dynamics of such kinds of applications. As such, the online evolution model may also be the result of the application of, for example, the Unified process to Web applications. In that case, however, each instance of the evolution cycle would result in a new instantiation of the development process. It is worth noting that taking into account the peculiarities of the Web also allows us to further refine the core development activities of Web applications, adding a second layer of detail to the development process model. Web applications share some architectural, technological, and usage characteristics that allow us to further separate the previously discussed development activities into smaller concerns. For instance, the design activity of Web applications can typically be separated into data design, navigation design, and presentation design. At this stage, we will not deepen the structuring of the above-described Web development activities into their subactivities. The following chapters, however, will discuss the single activities of the development process by providing some more insight into Web-specific peculiarities. 
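The two cycles just described can be sketched as a small transition map. The sketch below is our own toy illustration, not part of the model's formal definition; it collapses the model's seven transitions into the ones needed to exhibit the two cycles.

```python
# Toy sketch (our own illustration) of the online evolution model:
# the build-and-test cycle iterates offline over design,
# implementation, and testing; the evolution cycle feeds maintenance
# and evolution back into requirements analysis once the application
# is online.
TRANSITIONS = {
    "requirements analysis": ["design"],
    "design": ["implementation"],
    "implementation": ["testing and evaluation"],
    # iterate offline (build-and-test cycle), or go online via
    # deployment, which the model treats as a transition, not an activity:
    "testing and evaluation": ["design", "maintenance and evolution"],
    # the evolution cycle closes back on requirements analysis:
    "maintenance and evolution": ["requirements analysis"],
}

def reachable(start, graph):
    """Activities reachable from start by following transitions."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

# Every activity lies on a cycle, so all five are reachable from any one:
print(len(reachable("design", TRANSITIONS)))  # 5
```

Walking this map from any activity eventually revisits it, which is exactly the incremental, never-quite-finished life cycle the model describes.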
### 3.3.2 Web-Specific Actors

Independently of the previous analysis of the main development activities, we can say that Web application development involves the actors already discussed in Section 3.1.2, with two additional roles that are integral to the Web: the graphic designer and the webmaster.

Graphic designers are in charge of presentation design. The graphical appearance of Web applications is very important for both usability considerations and the attractiveness of the application. Graphic designers conceive the graphical appearance of the application, structure contents and images into layouts, and select suitable style properties (e.g., fonts, colors, and the size of images) based on the nonfunctional requirements dealing with the customer’s graphical corporate identity and with acknowledged communication standards. The strong separation of concerns applied to the design activity demands only little, if any, programming skills from graphic designers, which fully enables them to work with the software and design tools they are used to and to integrate their sketches with little effort.¹ Especially, the use of so-called mock-ups (graphical interface prototypes that do not yet support any real application feature) is very popular for discussing appearance characteristics with the customer and for getting early user feedback.

Webmasters are in charge of the maintenance and partly also the evolution of a Web application. Typically, each Web application that is online offers somewhere (e.g., in the contacts page or in the footer of the pages) the possibility to contact a person (the webmaster) to communicate, for example, broken links or other problems with the application. The role of the webmaster is common practice today, and it is new in the software development scenario, as there is no such role in traditional software development processes.

---
¹ We will come back to this consideration in Section 7.1 when discussing some presentation implementation issues.
Considering that a large part of the applications developed for the Web can be categorized as content management systems or data-intensive Web applications, we can distinguish two more actors, not directly involved in the development of the application itself but rather focusing on the contents of the application: the content author and the content manager. The content author creates new content (e.g., news articles, documentation, photos, blog entries, etc.) to be added to and published by the application. The content manager is responsible for content aggregation, content evaluation, quality assurance, and the final publishing.

3.4 Examples of Web-Specific Development Processes

The previous discussion introduced the reader to the online evolution model. In this section, we describe a few Web-specific application development models that can to some extent be accommodated by the online evolution model. The models refer to three well-known conceptual Web application development methods, i.e., WebML [CFB02], WSDM [TL98], and OOHDM [SR98]. They will be explained in more detail in Chapter 5 when discussing the design phase in the development process.

3.4.1 The WebML Model

The Web Modeling Language (WebML) [CFB02] is a visual language and development method for specifying the content structure of a Web application and the organization and presentation of contents in the form of hypertexts. The WebML method was proposed in 2000 [CFB00a] and then refined until it ensured complete coverage of the development process [BCC03], thanks to the availability of generative techniques supporting the automatic production of the application code. The main contribution of WebML is the proposal of a mix of concepts, notations, and techniques for the construction of data-intensive Web applications, which can be used by Web development teams to support all the activities of the application life cycle, from analysis to deployment and evolution.
The proposed mix blends traditional ingredients well known to developers, such as Use Case specification with UML and conceptual data design with the Entity-Relationship model, with new concepts and methods for the design of hypertexts, which are central to Web development. Therefore, the value of the proposed approach is not in the individual ingredients, but in the definition of a systematic framework in which the activities of Web application development can be organized according to the fundamental principles of software engineering, and all tasks, including the more Web-centric ones, find adequate support in appropriate concepts, notations, and techniques.

**Fig. 3.5.** Phases in the WebML development process [CFB+02]

Figure 3.5 shows the WebML approach to the development of Web applications. Inspired by Boehm's Spiral model, and in line with modern methods for Web and software application development [Con99, JBR99], the WebML process is applied in an iterative and incremental manner, in which the various phases are repeated and refined until results meet the application requirements. The product life cycle therefore undergoes several iterations, each one producing a prototype or a partial version of the application. At each iteration, the current version of the application is tested and evaluated, and then extended or modified to cope with the already collected requirements, as well as with newly emerged requirements.

Out of the entire process illustrated in Figure 3.5, the "upper" phases of analysis and design are those most influenced by the adoption of a conceptual model. The WebML method therefore focuses on them. However, as shown in the rest of this section, the adoption of a model also benefits the other phases.
**Requirements analysis**

In WebML, the requirements analysis phase aims at producing the following results:

- The identification of the *groups of users* addressed by the application. Each group represents users having the same profile, or performing the same activities with the same access rights over the same information classes.
- The specification of *functional requirements* that address the functions to be provided to users. For each group of users, the relevant activities to be performed are identified and specified; each activity is a cohesive set of elementary tasks.
- The identification of *core information objects*, i.e., the main information assets to be accessed and/or manipulated by users.
- The decomposition of the Web application into *site views*, i.e., different hypertexts designed to meet a well-defined set of functional and user requirements. Each user group will be provided with at least one site view supporting the functions identified for the group.

The WebML method does not prescribe any specific format for requirements specification. However, table formats are suggested for capturing the informal requirements (such as the group description table or the site views description table). UML use case diagrams and activity diagrams can also be used as standard representations of usage scenarios.

**Application design**

Application design is achieved by means of WebML-based conceptual schemas, which express the organization of the application domain and navigation components at a high level of abstraction, independently of implementation details. According to Figure 3.5, application design involves two activities:

- **Data Design**: corresponds to organizing the core information objects identified during requirements analysis into a comprehensive and coherent data schema.
- **Hypertext Design**: produces site view schemas on top of the data schema previously defined.
The distinguishing feature of the WebML approach is the emphasis on conceptual modeling for hypertext specification. The models provided by the WebML language for data and hypertext design will be described in more detail in Chapter 5.

**Implementation**

The WebRatio CASE tool [CFB02, Web07b] largely assists designers in the implementation of the database and of the Web application. First of all, it offers a visual environment for drawing the data and hypertext conceptual schemas. Such visual specifications are then stored as XML documents, and these are the inputs for the WebML code generator, which supports data and hypertext implementation. In Section 7.2.2 of this book we will come back to the WebRatio CASE tool and provide a brief discussion of its features.

**Testing and evaluation**

The WebML model-driven approach benefits the systematic testing of applications, thanks to the availability of the conceptual model and the model transformation approach to code generation [BFTM05]. With respect to the traditional testing of applications, the focus shifts from verifying individual Web applications to assessing the correctness of the code generator. The intuition is that if one could ensure that the code generator produces a correct implementation for all legal and meaningful conceptual schemas (i.e., combinations of modeling constructs), then testing Web applications would reduce to the more tractable problem of validating the conceptual schema.

WebML development also fosters innovative techniques for quality evaluation. The research in this area has led to a framework for the model-driven and automatic evaluation of Web application quality [FLMM04, LMM04, MLME04]. The framework supports the static (i.e., compile-time) analysis of conceptual schemas, and the dynamic (i.e., runtime) collection of Web usage data to be automatically analyzed and compared with the navigation dictated by the conceptual schema.
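As a minimal sketch of this kind of runtime usage analysis, the snippet below enriches plain HTTP log entries with the schema-level units and database objects behind each URL. The log format, the `url_to_unit` mapping, and all names in it are illustrative assumptions, not part of WebML or WebRatio.

```python
import re

# Parse a (simplified) common-log-format GET request line.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"GET (?P<url>\S+) HTTP/\d\.\d" (?P<status>\d+)'
)

# Hypothetical mapping derived from the conceptual schema:
# request path -> (content unit, published database object).
url_to_unit = {
    "/album.do?oid=17": ("AlbumDetails", "Album#17"),
    "/artist.do?oid=3": ("ArtistIndex", "Artist#3"),
}

def enrich(log_lines):
    """Correlate each HTTP log entry with schema-level information."""
    conceptual_log = []
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip malformed or non-GET entries
        unit, obj = url_to_unit.get(m.group("url"), (None, None))
        conceptual_log.append({
            "ip": m.group("ip"),
            "url": m.group("url"),
            "unit": unit,    # content unit accessed by the user
            "object": obj,   # database object published in the page
        })
    return conceptual_log
```

The resulting enriched entries can then be mined against the navigation paths allowed by the conceptual schema.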
The static analysis is based on the discovery in the conceptual schema of design patterns, and on their automatic evaluation against quality attributes encoded as rules. Conversely, usage analysis consists of the automatic examination and mining of enriched Web logs, called conceptual logs [FMM03], which correlate common HTTP logs with additional data about i) the units and link paths accessed by the users, and ii) the database objects published within the viewed pages.

**Maintenance and evolution**

In the WebML model-driven process, maintenance and evolution also benefit from the existence of a conceptual model of the application. Requests for changes can in fact be turned into changes at the conceptual level, either to the data model or to the hypertext model. Then, changes at the conceptual level are propagated to the implementation. This approach smoothly incorporates change management into the mainstream production life cycle and greatly reduces the risk of breaking the software engineering process due to the application of changes solely at the implementation level.

### 3.4.2 WSDM

The Web Site Design Method\(^2\) was initiated by De Troyer and Leune [TL98] in 1998, and was therefore one of the first Web design methods.

\(^2\) Later re-baptized as Web Semantics Design Methods.

Although WSDM was originally aimed at creating kiosk Web sites, it steadily evolved into a complete (semantic) Web design method supporting both functionality and a wide range of additional design concerns (localization, accessibility, semantic annotations, adaptivity, etc.). Some of these issues will be discussed in more detail later in this book.\(^3\)

WSDM is a multi-phase Web design method, where each phase focuses on one particular design concern. It possesses the following characteristic features:

- **Methodology**: more than other methods, WSDM is a methodology.
In addition to offering explicitly defined design primitives and models to describe a Web application at different levels of abstraction, WSDM also offers the designer aid on how to obtain the instantiations of these different models in order to obtain a well-structured, consistent, and usable Web application. WSDM thus offers the designer guidelines and techniques, thereby providing an explicit and systematic way to define Web applications.

- **Audience-driven**: consistent with knowledge established from user interface design and usability research, WSDM recognizes the importance of the users and thus takes as an explicit starting point an analysis of the different kinds of users (called audiences) and their (different) requirements and characteristics. This analysis subsequently steers the rest of the design. Such an approach, where the users are taken as the starting point for the further design, is called an *audience-driven* approach.

- **Semantic Web technology**: with the rise of the Semantic Web, WSDM has been transformed to take advantage of Semantic Web technology. This was done in two ways. First of all, the Semantic Web ontology language OWL was used internally to define the different WSDM design models and for describing the information and functionality present in the Web application. Secondly, WSDM was also extended to generate semantic annotations alongside the (traditional) Web application, thereby effectively enabling the Semantic Web.

Figure 3.6 shows the different WSDM phases, along with the design models each gives rise to. As already explained, all these models are expressed in the Semantic Web ontology language OWL. Together they form the WSDM Ontology. The next paragraphs explain each of WSDM's design phases in more detail.

**Mission statement**

The specification of the mission statement is the first phase of the WSDM design process.
The intention is to clearly set the boundaries for the design by identifying the purpose of the Web site, the topics, and the target users. The mission statement is used during the design to ensure all required information and functionality are present, and all targeted users are supported.\(^3\) After the design process it is used to verify whether the goals formulated for the Web application have been fulfilled. The mission statement is formulated in natural language.

**Audience modeling**

During the audience modeling phase, WSDM takes into account the fact that different visitors may have different needs and goals, and thus require support more specifically tailored to their needs.

During the *audience classification* subphase, the targeted visitors, who were informally identified in the mission statement, are refined and classified into audience classes. An audience class is a group of visitors that has the same information and functional requirements. Any audience class that has (the same or) some additional requirements compared to another audience class is called an audience subclass. This partial-order relationship gives rise to a hierarchical structure, called the audience class hierarchy. The "visitor" audience class is always the top of this hierarchy. It represents the requirements that all visitors have in common.

During the *audience characterization* subphase, the characteristics, navigation, and usability requirements for the members of each audience class are also formulated. These will be taken into account in the subsequent design phases.

**Conceptual design**

The conceptual design phase is split into two subphases: *task and information modeling* and *navigation design*.

\(^3\) Interested readers can read all about WSDM on [http://wsdm.vub.ac.be/](http://wsdm.vub.ac.be/).
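Returning to the audience class hierarchy introduced under audience modeling, the inheritance of requirements along the hierarchy can be sketched as a small data structure. The concrete classes and requirements below are invented for illustration; WSDM itself expresses these models in OWL.

```python
class AudienceClass:
    """A WSDM-style audience class; a subclass inherits the requirements
    of its parent and may add its own."""

    def __init__(self, name, requirements, parent=None):
        self.name = name
        self.own_requirements = set(requirements)
        self.parent = parent

    def all_requirements(self):
        """Every requirement of the class, including inherited ones."""
        inherited = self.parent.all_requirements() if self.parent else set()
        return inherited | self.own_requirements

# "Visitor" is always the top of the hierarchy: the requirements all
# visitors have in common. The other classes are hypothetical examples.
visitor = AudienceClass("Visitor", {"browse catalogue"})
customer = AudienceClass("Customer", {"place order"}, parent=visitor)
admin = AudienceClass("Administrator", {"manage stock"}, parent=visitor)
```

Each audience class would then be given its own navigation track covering exactly the requirements returned by `all_requirements()`.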
During task and information modeling, the designer models the tasks that need to be performed by the different audience classes, along with the required content and functionality. For each requirement that was formulated during audience modeling, a task model is defined. Each task model consists of a decomposition of the task needed to fulfill the particular requirement into elementary tasks, along with the temporal relations between them (e.g., sequential, order-independent). To perform this task analysis, WSDM uses a slightly modified version of Concurrent Task Trees (CTTs) ([Pat00] for CTTs, [TC03] for WSDM's modifications to CTTs). Subsequently, for each elementary task a so-called object chunk is created, which describes exactly what information and/or functionality is required to perform this elementary task. WSDM uses OWL (see, e.g., [CPT06]) to formally describe these object chunks.

During navigation design, the (conceptual) navigation structure is modeled in an implementation-independent way. It indicates the general organization structure of the Web application, i.e., how the different visitors will be able to navigate through the site. In WSDM, the basic navigation structure is based on the audience class hierarchy: for each audience class, a navigation track is constructed. Such a navigation track can be considered a sub-site, containing all and only the information and functionality needed for this audience class. The internal navigation structure within a navigation track is based on the (temporal relations in the) task models. The navigation model consists of three basic modeling elements: conceptual navigation nodes indicating units of navigation, links between these navigation nodes, and the object chunks which are connected to the navigation nodes.

**Implementation design**

During the implementation design, the conceptual models are complemented with all necessary details to prepare for the actual implementation, which can be generated automatically.
In the first subphase, the site structure design, the conceptual navigation structure is mapped onto actual pages. Several site structures are possible, depending on device, context, and platform (e.g., different screen sizes may give rise to different site structures).

During the presentation design, the general look and feel of the Web site is defined. For the different kinds of pages (e.g., home page, leaf page), templates may be defined that will serve as a base when designing the actual pages. During page design, the designer decides, for each actual page, the concrete interface elements (e.g., a drop-down list or radio buttons to represent a single-choice list), how to position these elements and the information and functionality described in the object chunks, and the general look and feel of the page. This results in so-called page models.

In the case of a data-intensive Web site, a database or CMS (Content Management System) can be used to store the data. In this case, the actual data source, and a mapping between the conceptual data model (i.e., the object chunks) and the data source, are specified during the (logical) data design subphase.

**Implementation**

Given the relevant (instantiated) design models (i.e., the object chunks, navigation model, site structure model, and page models), and, in the case of a data-intensive Web site, the data source and the (logical) data design, the actual Web site can be generated automatically. The literature describes two prototype implementations performing this code generation process: an XSLT-based transformation pipeline [PCY+05] and a Java-based servlet [Cas05]. For more information on the implementation of Web applications in the context of Web site design methods, see Section 7.2.2.

### 3.4.3 The OOHDM Model

The Object-Oriented Hypermedia Design Method (OOHDM) [SR98] is one of the first methods adopted for Web application development projects.
It has its roots in the hypermedia domain and focuses on supporting the development of applications that use hypertext/hypermedia paradigm features to explore distributed, heterogeneous information. The OOHDM method features object-oriented abstractions for the analysis and design of information-intensive Web applications. Besides the modeling abstractions, and similarly to WSDM and WebML, it also provides a methodology which guides a developer through the different activities of Web application development. The main features of OOHDM are [SR98]:

- **Navigation views**: OOHDM adopts a notion of navigation views for specifying how information objects should be grouped when explored by a user in a navigation session.
- **Navigation contexts**: OOHDM proposes navigation contexts as grouping abstractions to organize the navigation space.
- **Separation of concerns**: OOHDM features separation of concerns. The domain conceptual issues are separated from navigation issues, and both of them are separated from presentation issues. A query language is used to connect models from the different viewpoints.

Figure 3.7 depicts the OOHDM phases together with the design models that result from them. Here we briefly describe these phases. We will concentrate on the details of some of the models and modeling techniques provided by OOHDM later, in Chapter 5.

**Fig. 3.7.** Overview of the OOHDM design method

**Requirements analysis**

The primary goal of this phase in OOHDM is to capture and understand the functional and nonfunctional requirements of the Web application. The requirements analysis, sometimes also called requirements capture, is use case driven. This means that the functional requirements are elicited with the help of use cases, actors, and stakeholders of a Web application. The use cases are further refined into scenarios which reflect user tasks.
OOHDM features so-called user interaction diagrams [VSdS00], which capture how a user should interact with the application when fulfilling certain use cases.

**Conceptual design**

OOHDM conceptual design is concerned with the design of information structures for representing the content provided in a Web application. Well-known object-oriented principles are applied during this phase. The result is a class diagram extended with special constructs that allow attributes to have multiple values and perspectives. This feature is especially important for multi-modal Web applications and Web applications with semistructured content. The classes with their relationships can be grouped into subsystems. Conceptual design is separated from the other activities and deals only with application domain classes, without a connection to any further application solution for viewing and organizing the content.

**Navigation design**

OOHDM navigation design is concerned with navigation structures supporting a user exploring the information provided in a Web application. Cognitive issues are taken into account to reduce information overload and to support the user in getting oriented in the information hyperspace. The navigation design produces views of the information structures, which can be different for different audiences. This is reflected in navigation context schemas, where common views are grouped under one context. Navigation design may also be extended with behavioral specifications through navigation charts specifying some reactive behavior.

Navigation models are connected to conceptual models. They use underlying concepts from the conceptual models to derive the right perspective on an information structure, either by restricting it, projecting it, or transforming and combining it with other concepts. Different navigation views and schemas can be built for different purposes or users from a single conceptual model.
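The derivation of different navigation views from a single conceptual model can be sketched as a restriction plus projection, in the spirit of the description above. This is an illustrative sketch, not OOHDM notation; the classes, attributes, and function names are all invented.

```python
# A toy conceptual model: a flat list of typed objects.
conceptual_model = [
    {"class": "Paper", "title": "On Hypertext", "abstract": "A", "internal_notes": "draft"},
    {"class": "Paper", "title": "Web Design", "abstract": "B", "internal_notes": "ok"},
    {"class": "Author", "name": "Ada"},
]

def navigation_view(model, cls, attributes, predicate=lambda obj: True):
    """Derive a navigation view by restricting the model to one class
    (and an optional predicate) and projecting selected attributes."""
    return [
        {a: obj[a] for a in attributes}
        for obj in model
        if obj["class"] == cls and predicate(obj)
    ]

# A view for anonymous visitors: papers only, without internal attributes.
visitor_view = navigation_view(conceptual_model, "Paper", ["title", "abstract"])
```

Different audiences would simply call `navigation_view` with different class selections, predicates, and attribute projections over the same conceptual model.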
**Abstract interface design**

Abstract interface design follows object-oriented design principles and focuses on perceivable objects, defining how navigation views should be displayed and augmented with further interaction elements, such as buttons and links. Abstract data view charts may be used to specify the behavior of presentation objects. The abstract data views follow the same principle as the navigation views. They can be seen as façade abstractions, representing different appearances of the navigation nodes to different users in different contexts. They also feature a behavioral and interactive aspect and are therefore also very suitable for describing modern interactive Web applications.

**Implementation**

OOHDM does not use any specific implementation framework. It is up to the development team to decide how to transform the results of the aforementioned phases into an implementation. The development team makes decisions on the architecture (e.g., client-server), the database management system used to store information structures and data, and the application and Web servers that compute navigation and presentation views and handle user interaction events. Refer to [JSR02] for details on a mapping to an architecture based on J2EE and the Model-View-Controller model.

### 3.5 Summary

Summing up the lessons learned in this chapter and the considerations that led to the definition of the online evolution model, we can say that Web-specific development processes in general distinguish themselves from traditional software development processes because of the following general characteristics:

- *Continuous* and fast development and release times are paramount.
- Web development processes are less documentation-based and, rather, put high emphasis on *prototypes* (prototypes are much more expressive than technical documents, especially to unskilled customers).
- High *user involvement* and early feedback is desirable.
- A new actor enters the development process: the *graphic designer*.
If we look at the activities in the development process, we can also identify the following activity-specific characteristics:

- The *requirements analysis, design, and implementation* activities can be further detailed into typical Web-specific subactivities. This allows for the conception of specific processes, instruments, models, and tools, assisting developers and lowering risks.
- The *implementation* activity is highly standards-based. This contributes to the fast adaptation of developers to new projects, to higher interoperability of the conceived solutions, and to elevated robustness.
- The *deployment* of Web applications is typically fast. There is no need for client-side code that requires manual installation procedures, and, hence, there is no need for complicated installation and deployment processes. Installation and deployment come almost for free, and consistency among clients is implicitly guaranteed.
- The continuous (online) *evolution* of Web applications is an integral part of the development process. The development cycle continues even after the deployment of the application. This may be indispensable if we want to keep the attractiveness of the Web application high and enlarge the application's audience.

The organization of the following chapters is based on the online evolution model depicted in Figure 3.3.

### 3.6 Further Readings

Web engineering processes have been described from several points of view in a number of publications. The development of hypermedia-oriented Web applications was discussed in [NN95, NN99]. These works concentrate mostly on the design process, which is characterized as a motion in a four-dimensional space of guidelines for hypermedia application, hypermedia design and development, hypermedia system, and human factors. The readings are recommended as a general introduction to Web process guideline frameworks.
Similarly, [DB02b] concentrates on a design organization framework as a five-dimensional space of hypermedia application, notation, development process, aspect, and degree of formality. The design process is then considered as a chain of instances over the dimensions.

Another Web application development model, the IMPACT-A method [LBW99], inspired by the Software Engineering Institute's Capability Maturity Model (CMM) framework [PCCW93] and the SPICE architecture of process assessment [EG96], is based on a three-dimensional space formed by the following dimensions: process entities, hypermedia entities, and time. The process dimension is characterized by entities such as Resource, Activity, and Artifact. A Resource can be a tool, a skill, or a person. High-level hypermedia entities are structure, navigation, behavior, and interaction. The method contains two high-level phases: a preparatory phase and an execution phase. The preparatory phase is concerned with choosing a model, the quality attributes for assessment, and the measuring methods for those attributes. The execution phase is carried out for each development project. The model serves as a guide for developers to understand development. Attributes assist developers in identifying particular aspects that are important for assessment. Tasks are guides for attribute assessment.

The Hypertext Design Model HDM [GP93], W2000 [BGP01], the Relationship Management Methodology (RMM) [ISB95], UML-based Web Engineering (UWE) [HK00], and the Scenario-Based Object-Oriented Hypermedia Design Methodology (SOHDM) [LLY99] are examples of other development methods with slightly different views on the organization of activities and processes.

Proceedings from the World Wide Web conference and the Web engineering conference, collections such as [MD01, KPRR03], and books such as [DLWZ03, PJC98, Pre05, CFB+02, Con00] are sources for further valuable insights into Web application development processes.
Engineering Web Applications
Casteleyn, S.; Daniel, F.; Dolog, P.; Matera, M.
2009, XIII, 349 p. 109 illus., Hardcover
ISBN: 978-3-540-92200-1
Ogg Encapsulation for the Opus Audio Codec

Abstract

This document defines the Ogg encapsulation for the Opus interactive speech and audio codec. This allows data encoded in the Opus format to be stored in an Ogg logical bitstream.

Status of This Memo

This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7845.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1. Introduction

The IETF Opus codec is a low-latency audio codec optimized for both voice and general-purpose audio. See [RFC6716] for technical details. This document defines the encapsulation of Opus in a continuous, logical Ogg bitstream [RFC3533]. Ogg encapsulation provides Opus with a long-term storage format supporting all of the essential features, including metadata, fast and accurate seeking, corruption detection, recapture after errors, low overhead, and the ability to multiplex Opus with other codecs (including video) with minimal buffering.
It also provides a live streamable format capable of delivery over a reliable stream-oriented transport, without requiring all the data (or even the total length of the data) up-front, in a form that is identical to the on-disk storage format.

Ogg bitstreams are made up of a series of "pages", each of which contains data from one or more "packets". Pages are the fundamental unit of multiplexing in an Ogg stream. Each page is associated with a particular logical stream and contains a capture pattern and checksum, flags to mark the beginning and end of the logical stream, and a "granule position" that represents an absolute position in the stream, to aid seeking. A single page can contain up to 65,025 octets of packet data from up to 255 different packets. Packets can be split arbitrarily across pages and continued from one page to the next (allowing packets much larger than would fit on a single page). Each page contains "lacing values" that indicate how the data is partitioned into packets, allowing a demultiplexer (demuxer) to recover the packet boundaries without examining the encoded data. A packet is said to "complete" on a page when the page contains the final lacing value corresponding to that packet.

This encapsulation defines the contents of the packet data, including the necessary headers, the organization of those packets into a logical stream, and the interpretation of the codec-specific granule position field. It does not attempt to describe or specify the existing Ogg container format. Readers unfamiliar with the basic concepts mentioned above are encouraged to review the details in [RFC3533].

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

3. Packet Organization

An Ogg Opus stream is organized as follows (see Figure 1 for an example).
<table> <thead> <tr> <th>Page 0</th> <th>Pages 1 ... n</th> <th>Pages (n+1) ...</th> </tr> </thead> <tbody> <tr> <td>ID Header</td> <td>Comment Header</td> <td>Audio Data Packet 1</td> </tr> </tbody> </table>

In Figure 1, the ID header is contained on a single page, which is marked 'Beginning Of Stream', and a mandatory page break separates the comment header from the first audio data packet.

Figure 1: Example Packet Organization for a Logical Ogg Opus Stream

There are two mandatory header packets. The first packet in the logical Ogg bitstream MUST contain the identification (ID) header, which uniquely identifies a stream as Opus audio. The format of this header is defined in Section 5.1. It is placed alone (without any other packet data) on the first page of the logical Ogg bitstream and completes on that page. This page has its 'beginning of stream' flag set.

The second packet in the logical Ogg bitstream MUST contain the comment header, which contains user-supplied metadata. The format of this header is defined in Section 5.2. It MAY span multiple pages, beginning on the second page of the logical stream. However many pages it spans, the comment header packet MUST finish the page on which it completes.

All subsequent pages are audio data pages, and the Ogg packets they contain are audio data packets. Each audio data packet contains one Opus packet for each of N different streams, where N is typically one for mono or stereo, but MAY be greater than one for multichannel audio. The value N is specified in the ID header (see Section 5.1.1), and is fixed over the entire length of the logical Ogg bitstream.

The first \((N - 1)\) Opus packets, if any, are packed one after another into the Ogg packet, using the self-delimiting framing from Appendix B of [RFC6716]. The remaining Opus packet is packed at the end of the Ogg packet using the regular, undelimited framing from Section 3 of [RFC6716]. All of the Opus packets in a single Ogg packet MUST be constrained to have the same duration.
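To make the lacing mechanism from the introduction concrete, here is a small sketch (the helper name is illustrative, and continued data from a previous page is ignored for simplicity) of how a demuxer recovers packet boundaries from a page's lacing values:

```python
def split_packets(page_data: bytes, lacing_values: list[int]) -> tuple[list[bytes], bytes]:
    """Recover packet boundaries on one Ogg page from its lacing values.

    Each lacing value is the length (0-255) of the next segment of packet
    data; a packet completes at the first lacing value below 255, while a
    trailing run of 255s means the last packet continues on the next page.
    """
    packets = []          # packets that complete on this page
    fragment = b""        # data for the packet currently being assembled
    pos = 0
    for lv in lacing_values:
        fragment += page_data[pos:pos + lv]
        pos += lv
        if lv < 255:      # this packet completes here
            packets.append(fragment)
            fragment = b""
    return packets, fragment  # non-empty 'fragment' continues on the next page
```

Note how a packet of exactly 255 octets is followed by a lacing value of 0, so that it still completes on the page.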
An implementation of this specification SHOULD treat any Opus packet whose duration is different from that of the first Opus packet in an Ogg packet as if it were a malformed Opus packet with an invalid Table Of Contents (TOC) sequence. The TOC sequence at the beginning of each Opus packet indicates the coding mode, audio bandwidth, channel count, duration (frame size), and number of frames per packet, as described in Section 3.1 of [RFC6716]. The coding mode is one of SILK, Hybrid, or Constrained Energy Lapped Transform (CELT). The combination of coding mode, audio bandwidth, and frame size is referred to as the configuration of an Opus packet. Packets are placed into Ogg pages in order until the end of stream. Audio data packets might span page boundaries. The first audio data page could have the 'continued packet' flag set (indicating the first audio data packet is continued from a previous page) if, for example, it was a live stream joined mid-broadcast, with the headers pasted on the front. If a page has the 'continued packet' flag set and one of the following conditions is also true: - the previous page with packet data does not end in a continued packet (does not end with a lacing value of 255) OR - the page sequence numbers are not consecutive, then a demuxer MUST NOT attempt to decode the data for the first packet on the page unless the demuxer has some special knowledge that would allow it to interpret this data despite the missing pieces. An implementation MUST treat a zero-octet audio data packet as if it were a malformed Opus packet as described in Section 3.4 of [RFC6716]. A logical stream ends with a page with the 'end of stream' flag set, but implementations need to be prepared to deal with truncated streams that do not have a page marked 'end of stream'. There is no reason for the final packet on the last page to be a continued packet, i.e., for the final lacing value to be 255. 
However, demuxers might encounter such streams, possibly as the result of a transfer that did not complete or of corruption. If a packet continues onto a subsequent page (i.e., when the page ends with a lacing value of 255) and one of the following conditions is also true: - the next page with packet data does not have the ‘continued packet’ flag set, OR - there is no next page with packet data, OR - the page sequence numbers are not consecutive, then a demuxer MUST NOT attempt to decode the data from that packet unless the demuxer has some special knowledge that would allow it to interpret this data despite the missing pieces. There MUST NOT be any more pages in an Opus logical bitstream after a page marked ‘end of stream’. 4. Granule Position The granule position MUST be zero for the ID header page and the page where the comment header completes. That is, the first page in the logical stream and the last header page before the first audio data page both have a granule position of zero. The granule position of an audio data page encodes the total number of PCM samples in the stream up to and including the last fully decodable sample from the last packet completed on that page. The granule position of the first audio data page will usually be larger than zero, as described in Section 4.5. A page that is entirely spanned by a single packet (that completes on a subsequent page) has no granule position, and the granule position field is set to the special value ‘-1’ in two’s complement. The granule position of an audio data page is in units of PCM audio samples at a fixed rate of 48 kHz (per channel; a stereo stream’s granule position does not increment at twice the speed of a mono stream). It is possible to run an Opus decoder at other sampling rates, but all Opus packets encode samples at a sampling rate that evenly divides 48 kHz. 
Therefore, the value in the granule position field always counts samples assuming a 48 kHz decoding rate, and the rest of this specification makes the same assumption. The duration of an Opus packet as defined in [RFC6716] can be any multiple of 2.5 ms, up to a maximum of 120 ms. This duration is encoded in the TOC sequence at the beginning of each packet. The number of samples returned by a decoder corresponds to this duration exactly, even for the first few packets. For example, a 20 ms packet fed to a decoder running at 48 kHz will always return 960 samples. A demuxer can parse the TOC sequence at the beginning of each Ogg packet to work backwards or forwards from a packet with a known granule position (i.e., the last packet completed on some page) in order to assign granule positions to every packet, or even every individual sample. The one exception is the last page in the stream, as described below. All other pages with completed packets after the first MUST have a granule position equal to the number of samples contained in packets that complete on that page plus the granule position of the most recent page with completed packets. This guarantees that a demuxer can assign individual packets the same granule position when working forwards as when working backwards. For this to work, there cannot be any gaps. 4.1. Repairing Gaps in Real-Time Streams In order to support capturing a real-time stream that has lost or not transmitted packets, a multiplexer (muxer) SHOULD emit packets that explicitly request the use of Packet Loss Concealment (PLC) in place of the missing packets. Implementations that fail to do so still MUST NOT increment the granule position for a page by anything other than the number of samples contained in packets that actually complete on that page. Only gaps that are a multiple of 2.5 ms are repairable, as these are the only durations that can be created by packet loss or discontinuous transmission. Muxers need not handle other gap sizes. 
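The granule-position accounting rule above (a page's position advances only by the samples that complete on it) can be sketched as a small helper; the function name is illustrative, and `initial` stands in for the granule position of the first audio data page, which may be larger than zero, as Section 4.5 describes:

```python
def page_granule_positions(samples_completed_per_page: list[int], initial: int = 0) -> list[int]:
    """Granule position for each audio data page on which packets complete:
    the most recent position plus exactly the samples (at 48 kHz) contained
    in the packets completing on that page. Pages where no packet completes
    carry no granule position (the field is set to -1) and are omitted here."""
    positions = []
    position = initial
    for samples in samples_completed_per_page:
        position += samples
        positions.append(position)
    return positions
```

Because the rule is a pure running sum, working backwards from a known position yields the same per-packet positions as working forwards, which is exactly the guarantee the text requires.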
Creating the necessary packets involves synthesizing a TOC byte (defined in Section 3.1 of [RFC6716]) -- and whatever additional internal framing is needed -- to indicate the packet duration for each stream. The actual length of each missing Opus frame inside the packet is zero bytes, as defined in Section 3.2.1 of [RFC6716]. Zero-byte frames MAY be packed into packets using any of codes 0, 1, 2, or 3. When successive frames have the same configuration, the higher code packings reduce overhead. Likewise, if the TOC configuration matches, the muxer MAY further combine the empty frames with previous or subsequent nonzero-length frames (using code 2 or variable bitrate (VBR) code 3). [RFC6716] does not impose any requirements on the PLC, but this section outlines choices that are expected to have a positive influence on most PLC implementations, including the reference implementation. Synthesized TOC sequences SHOULD maintain the same mode, audio bandwidth, channel count, and frame size as the previous packet (if any). This is the simplest and usually the most well- tested case for the PLC to handle and it covers all losses that do not include a configuration switch, as defined in Section 4.5 of [RFC6716]. When a previous packet is available, keeping the audio bandwidth and channel count the same allows the PLC to provide maximum continuity in the concealment data it generates. However, if the size of the gap is not a multiple of the most recent frame size, then the frame size will have to change for at least some frames. Such changes SHOULD be delayed as long as possible to simplify things for PLC implementations. As an example, a 95 ms gap could be encoded as nineteen 5 ms frames in two bytes with a single constant bitrate (CBR) code 3 packet. 
If the previous frame size was 20 ms, using four 20 ms frames followed by three 5 ms frames requires 4 bytes (plus an extra byte of Ogg lacing overhead), but allows the PLC to use its well-tested steady state behavior for as long as possible. The total bitrate of the latter approach, including Ogg overhead, is about 0.4 kbps, so the impact on file size is minimal. Changing modes is discouraged, since this causes some decoder implementations to reset their PLC state. However, SILK and Hybrid mode frames cannot fill gaps that are not a multiple of 10 ms. If switching to CELT mode is needed to match the gap size, a muxer SHOULD do so at the end of the gap to allow the PLC to function for as long as possible. In the example above, if the previous frame was a 20 ms SILK mode frame, the better solution is to synthesize a packet describing four 20 ms SILK frames, followed by a packet with a single 10 ms SILK frame, and finally a packet with a 5 ms CELT frame, to fill the 95 ms gap. This also requires four bytes to describe the synthesized packet data (two bytes for a CBR code 3 and one byte each for two code 0 packets) but three bytes of Ogg lacing overhead are needed to mark the packet boundaries. At 0.6 kbps, this is still a minimal bitrate impact over a naive, low-quality solution. Since medium-band audio is an option only in the SILK mode, wideband frames SHOULD be generated if switching from that configuration to CELT mode, to ensure that any PLC implementation that does try to migrate state between the modes will be able to preserve all of the available audio bandwidth. 4.2. Pre-skip There is some amount of latency introduced during the decoding process, to allow for overlap in the CELT mode, stereo mixing in the SILK mode, and resampling. The encoder might have introduced additional latency through its own resampling and analysis (though the exact amount is not specified). 
Therefore, the first few samples produced by the decoder do not correspond to real input audio, but are instead composed of padding inserted by the encoder to compensate for this latency. These samples need to be stored and decoded, as Opus is an asymptotically convergent predictive codec, meaning the decoded contents of each frame depend on the recent history of decoder inputs. However, a player will want to skip these samples after decoding them. A ‘pre-skip’ field in the ID header (see Section 5.1) signals the number of samples that SHOULD be skipped (decoded but discarded) at the beginning of the stream, though some specific applications might have a reason for looking at that data. This amount need not be a multiple of 2.5 ms, MAY be smaller than a single packet, or MAY span the contents of several packets. These samples are not valid audio. For example, if the first Opus frame uses the CELT mode, it will always produce 120 samples of windowed overlap-add data. However, the overlap data is initially all zeros (since there is no prior frame), meaning this cannot, in general, accurately represent the original audio. The SILK mode requires additional delay to account for its analysis and resampling latency. The encoder delays the original audio to avoid this problem. The ‘pre-skip’ field MAY also be used to perform sample-accurate cropping of already encoded streams. In this case, a value of at least 3840 samples (80 ms) provides sufficient history to the decoder that it will have converged before the stream’s output begins. 4.3. PCM Sample Position The PCM sample position is determined from the granule position using the following formula: ‘PCM sample position’ = ‘granule position’ - ‘pre-skip’ For example, if the granule position of the first audio data page is 59,971, and the pre-skip is 11,971, then the PCM sample position of the last decoded sample from that page is 48,000. 
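A quick check of the arithmetic, using the values from the example (the function name is illustrative):

```python
def pcm_sample_position(granule_position: int, pre_skip: int) -> int:
    """PCM sample position of the last decodable sample on a page, per the
    formula above (all values count samples at 48 kHz)."""
    return granule_position - pre_skip
```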
This can be converted into a playback time using the following formula: \[ \text{playback time} = \frac{\text{PCM sample position}}{48000.0} \] The initial PCM sample position before any samples are played is normally ‘0’. In this case, the PCM sample position of the first audio sample to be played starts at ‘1’, because it marks the time on the clock _after_ that sample has been played, and a stream that is exactly one second long has a final PCM sample position of ‘48000’, as in the example here. Vorbis streams use a granule position smaller than the number of audio samples contained in the first audio data page to indicate that some of those samples are trimmed from the output (see [VORBIS-TRIM]). However, to do so, Vorbis requires that the first audio data page contains exactly two packets, in order to allow the decoder to perform PCM position adjustments before needing to return any PCM data. Opus uses the pre-skip mechanism for this purpose instead, since the encoder might introduce more than a single packet’s worth of latency, and since very large packets in streams with a very large number of channels might not fit on a single page. 4.4. End Trimming The page with the ‘end of stream’ flag set MAY have a granule position that indicates the page contains less audio data than would normally be returned by decoding up through the final packet. This is used to end the stream somewhere other than an even frame boundary. The granule position of the most recent audio data page with completed packets is used to make this determination, or ‘0’ is used if there were no previous audio data pages with a completed packet. The difference between these granule positions indicates how many samples to keep after decoding the packets that completed on the final page. The remaining samples are discarded. The number of discarded samples SHOULD be no larger than the number decoded from the last packet. 4.5. 
Restrictions on the Initial Granule Position

The granule position of the first audio data page with a completed packet MAY be larger than the number of samples contained in packets that complete on that page. However, it MUST NOT be smaller, unless that page has the 'end of stream' flag set. Allowing a granule position larger than the number of samples allows the beginning of a stream to be cropped or a live stream to be joined without rewriting the granule position of all the remaining pages. This means that the PCM sample position just before the first sample to be played MAY be larger than '0'. Synchronization when multiplexing with other logical streams still uses the PCM sample position relative to '0' to compute sample times. This does not affect the behavior of pre-skip: exactly 'pre-skip' samples SHOULD be skipped from the beginning of the decoded output, even if the initial PCM sample position is greater than zero.

On the other hand, a granule position that is smaller than the number of decoded samples prevents a demuxer from working backwards to assign each packet or each individual sample a valid granule position, since granule positions are non-negative. An implementation MUST treat any stream as invalid if the granule position is smaller than the number of samples contained in packets that complete on the first audio data page with a completed packet, unless that page has the 'end of stream' flag set. It MAY defer this action until it decodes the last packet completed on that page. If that page has the 'end of stream' flag set, a demuxer MUST treat any stream as invalid if its granule position is smaller than the 'pre-skip' amount. This would indicate that there are more samples to be skipped from the initial decoded output than exist in the stream.
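These validity rules for the first audio data page with a completed packet can be sketched as follows (argument names are illustrative; `samples_completed` is the number of samples in packets completing on that page):

```python
def first_audio_page_valid(granule_position: int, samples_completed: int,
                           pre_skip: int, end_of_stream: bool) -> bool:
    """Apply the two 'treat as invalid' rules for the first audio data page."""
    if not end_of_stream and granule_position < samples_completed:
        return False  # cannot work backwards to non-negative granule positions
    if end_of_stream and granule_position < pre_skip:
        return False  # more samples to skip than exist in the stream
    return True
```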
If the granule position is smaller than the number of decoded samples produced by the packets that complete on that page, then a demuxer MUST use an initial granule position of '0', and can work forwards from '0' to timestamp individual packets. If the granule position is larger than the number of decoded samples available, then the demuxer MUST still work backwards as described above, even if the 'end of stream' flag is set, to determine the initial granule position, and thus the initial PCM sample position. Both of these will be greater than '0' in this case.

4.6. Seeking and Pre-roll

Seeking in Ogg files is best performed using a bisection search for a page whose granule position corresponds to a PCM position at or before the seek target. With appropriately weighted bisection, accurate seeking can be performed in just one or two bisections on average, even in multi-gigabyte files. See [SEEKING] for an example of general implementation guidance.

When seeking within an Ogg Opus stream, an implementation SHOULD start decoding (and discarding the output) at least 3840 samples (80 ms) prior to the seek target in order to ensure that the output audio is correct by the time it reaches the seek target. This "pre-roll" is separate from, and unrelated to, the pre-skip used at the beginning of the stream. If the point 80 ms prior to the seek target comes before the initial PCM sample position, an implementation SHOULD start decoding from the beginning of the stream, applying pre-skip as normal, regardless of whether the pre-skip is larger or smaller than 80 ms, and then continue to discard samples to reach the seek target (if any).

5. Header Packets

An Ogg Opus logical stream contains exactly two mandatory header packets: an identification header and a comment header.

5.1.
Identification Header

      0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |      'O'      |      'p'      |      'u'      |      's'      |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |      'H'      |      'e'      |      'a'      |      'd'      |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |  Version = 1  | Channel Count |           Pre-skip            |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                     Input Sample Rate (Hz)                    |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |   Output Gain (Q7.8 in dB)    | Mapping Family|               |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               :
     |                                                               |
     :               Optional Channel Mapping Table...               :
     |                                                               |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2: ID Header Packet

The fields in the identification (ID) header have the following meaning:

1. Magic Signature: This is an 8-octet (64-bit) field that allows codec identification and is human readable. It contains, in order, the magic numbers:

   0x4F 'O'
   0x70 'p'
   0x75 'u'
   0x73 's'
   0x48 'H'
   0x65 'e'
   0x61 'a'
   0x64 'd'

   Starting with "Op" helps distinguish it from audio data packets, as this is an invalid TOC sequence.

2. Version (8 bits, unsigned): The version number MUST always be '1' for this version of the encapsulation specification. Implementations SHOULD treat streams where the upper four bits of the version number match that of a recognized specification as backwards compatible with that specification. That is, the version number can be split into "major" and "minor" version sub-fields, with changes to the minor sub-field (in the lower four bits) signaling compatible changes. For example, an implementation of this specification SHOULD accept any stream with a version number of '15' or less, and SHOULD assume any stream with a version number '16' or greater is incompatible. The initial version '1' was chosen to keep implementations from relying on this octet as a null terminator for the "OpusHead" string.

3.
Output Channel Count 'C' (8 bits, unsigned): This is the number of output channels. This might be different than the number of encoded channels, which can change on a packet-by-packet basis. This value MUST NOT be zero. The maximum allowable value depends on the channel mapping family, and might be as large as 255. See Section 5.1.1 for details. 4. Pre-skip (16 bits, unsigned, little endian): This is the number of samples (at 48 kHz) to discard from the decoder output when starting playback, and also the number to subtract from a page’s granule position to calculate its PCM sample position. When cropping the beginning of existing Ogg Opus streams, a pre-skip of at least 3,840 samples (80 ms) is RECOMMENDED to ensure complete convergence in the decoder. 5. Input Sample Rate (32 bits, unsigned, little endian): This is the sample rate of the original input (before encoding), in Hz. This field is _not_ the sample rate to use for playback of the encoded data. Opus can switch between internal audio bandwidths of 4, 6, 8, 12, and 20 kHz. Each packet in the stream can have a different audio bandwidth. Regardless of the audio bandwidth, the reference decoder supports decoding any stream at a sample rate of 8, 12, 16, 24, or 48 kHz. The original sample rate of the audio passed to the encoder is not preserved by the lossy compression. An Ogg Opus player SHOULD select the playback sample rate according to the following procedure: 1. If the hardware supports 48 kHz playback, decode at 48 kHz. 2. Otherwise, if the hardware’s highest available sample rate is a supported rate, decode at this sample rate. 3. Otherwise, if the hardware’s highest available sample rate is less than 48 kHz, decode at the next higher Opus supported rate above the highest available hardware rate and resample. 4. Otherwise, decode at 48 kHz and resample. However, the ‘input sample rate’ field allows the muxer to pass the sample rate of the original input stream as metadata. 
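The four-step rate-selection procedure might be sketched as follows (a simplified model in which the hardware reports a set of supported rates; the function name is illustrative):

```python
# Rates at which the reference decoder can run; all divide 48 kHz evenly.
SUPPORTED_RATES = (8000, 12000, 16000, 24000, 48000)

def choose_playback_rate(hw_rates: list[int]) -> tuple[int, bool]:
    """Pick a decode rate per the procedure above; returns (decode_rate,
    needs_resampling). 'hw_rates' models the rates the hardware supports."""
    best_hw = max(hw_rates)
    if 48000 in hw_rates:
        return 48000, False                                   # step 1
    if best_hw in SUPPORTED_RATES:
        return best_hw, False                                 # step 2
    if best_hw < 48000:                                       # step 3
        return min(r for r in SUPPORTED_RATES if r > best_hw), True
    return 48000, True                                        # step 4
```

Note that the 'input sample rate' field plays no part in this selection; it merely records the original rate as metadata.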
This is useful when the user requires the output sample rate to match the input sample rate. For example, when not playing the output, an implementation writing PCM format samples to disk might choose to resample the audio back to the original input sample rate to reduce surprise to the user, who might reasonably expect to get back a file with the same sample rate. A value of zero indicates "unspecified". Muxers SHOULD write the actual input sample rate or zero, but implementations that do something with this field SHOULD take care to behave sanely if given crazy values (e.g., do not actually upsample the output to 10 MHz if requested). Implementations SHOULD support input sample rates between 8 kHz and 192 kHz (inclusive). Rates outside this range MAY be ignored by falling back to the default rate of 48 kHz instead.

6. Output Gain (16 bits, signed, little endian): This is a gain to be applied when decoding. It is 20*log10 of the factor by which to scale the decoder output to achieve the desired playback volume, stored in a 16-bit, signed, two's complement fixed-point value with 8 fractional bits (i.e., Q7.8 [Q-NOTATION]). To apply the gain, an implementation could use the following:

   sample *= pow(10, output_gain/(20.0*256))

where 'output_gain' is the raw 16-bit value from the header. Players and media frameworks SHOULD apply it by default. If a player chooses to apply any volume adjustment or gain modification, such as the R128_TRACK_GAIN (see Section 5.2), the adjustment MUST be applied in addition to this output gain in order to achieve playback at the normalized volume. A muxer SHOULD set this field to zero, and instead apply any gain prior to encoding, when this is possible and does not conflict with the user's wishes. A nonzero output gain indicates the gain was adjusted after encoding, or that a user wished to adjust the gain for playback while preserving the ability to recover the original signal amplitude.
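A minimal sketch of applying the output gain together with an additional adjustment such as R128_TRACK_GAIN (function and argument names are illustrative; real implementations would operate on sample buffers, not Python lists):

```python
import math

def apply_output_gain(samples: list[float], output_gain_q78: int,
                      extra_gain_db: float = 0.0) -> list[float]:
    """Scale decoded samples by the header's Q7.8 output gain. Any further
    adjustment (e.g., R128_TRACK_GAIN) is added on top, in dB, so both
    gains are applied together as the text requires."""
    gain_db = output_gain_q78 / 256.0 + extra_gain_db  # Q7.8 -> dB
    scale = math.pow(10.0, gain_db / 20.0)
    return [s * scale for s in samples]
```

Summing the gains in dB before converting to a linear factor is equivalent to applying the two scale factors in sequence, but avoids a second pass over the samples.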
Although the output gain has enormous range (+/- 128 dB, enough to amplify inaudible sounds to the threshold of physical pain), most applications can only reasonably use a small portion of this range around zero. The large range serves in part to ensure that gain can always be losslessly transferred between OpusHead and R128 gain tags (see below) without saturating.

7. Channel Mapping Family (8 bits, unsigned): This octet indicates the order and semantic meaning of the output channels. Each currently specified value of this octet indicates a mapping family, which defines a set of allowed channel counts, and the ordered set of channel names for each allowed channel count. The details are described in Section 5.1.1.

8. Channel Mapping Table: This table defines the mapping from encoded streams to output channels. Its contents are specified in Section 5.1.1.

All fields in the ID headers are REQUIRED, except for 'channel mapping table', which MUST be omitted when the channel mapping family is 0, but is REQUIRED otherwise. Implementations SHOULD treat a stream as invalid if it contains an ID header that does not have enough data for these fields, even if it contains a valid 'magic signature'.

Future versions of this specification, even backwards-compatible versions, might include additional fields in the ID header. If an ID header has a compatible major version, but a larger minor version, an implementation MUST NOT treat it as invalid for containing additional data not specified here, provided it still completes on the first page.

5.1.1. Channel Mapping

An Ogg Opus stream allows mapping one number of Opus streams (N) to a possibly larger number of decoded channels (M + N) to yet another number of output channels (C), which might be larger or smaller than the number of decoded channels.
The order and meaning of these channels are defined by a channel mapping, which consists of the 'channel mapping family' octet and, for channel mapping families other than family 0, a 'channel mapping table', as illustrated in Figure 3.

```
      0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     | Stream Count  | Coupled Count |          Channel Mapping...   :
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

Figure 3: Channel Mapping Table

The fields in the channel mapping table have the following meaning:

1. Stream Count 'N' (8 bits, unsigned): This is the total number of streams encoded in each Ogg packet. This value is necessary to correctly parse the packed Opus packets inside an Ogg packet, as described in Section 3. This value MUST NOT be zero, as without at least one Opus packet with a valid TOC sequence, a demuxer cannot recover the duration of an Ogg packet. For channel mapping family 0, this value defaults to 1, and is not coded.

2. Coupled Stream Count 'M' (8 bits, unsigned): This is the number of streams whose decoders are to be configured to produce two channels (stereo). This MUST be no larger than the total number of streams, N. Each packet in an Opus stream has an internal channel count of 1 or 2, which can change from packet to packet. This is selected by the encoder depending on the bitrate and the audio being encoded. The original channel count of the audio passed to the encoder is not necessarily preserved by the lossy compression. Regardless of the internal channel count, any Opus stream can be decoded as mono (a single channel) or stereo (two channels) by appropriate initialization of the decoder. The 'coupled stream count' field indicates that the decoders for the first M Opus streams are to be initialized for stereo (two-channel) output, and the remaining (N - M) decoders are to be initialized for mono (a single channel) only.
The total number of decoded channels, (M + N), MUST be no larger than 255, as there is no way to index more channels than that in the channel mapping. For channel mapping family 0, this value defaults to (C - 1) (i.e., 0 for mono and 1 for stereo), and is not coded. 3. Channel Mapping (8*C bits): This contains one octet per output channel, indicating which decoded channel is to be used for each one. Let 'index' be the value of this octet for a particular output channel. This value MUST either be smaller than (M + N) or be the special value 255. If 'index' is less than 2*M, the output MUST be taken from decoding stream ('index'/2) as stereo and selecting the left channel if 'index' is even, and the right channel if 'index' is odd. If 'index' is 2*M or larger, but less than 255, the output MUST be taken from decoding stream ('index' - M) as mono. If 'index' is 255, the corresponding output channel MUST contain pure silence. The number of output channels, C, is not constrained to match the number of decoded channels (M + N). A single index value MAY appear multiple times, i.e., the same decoded channel might be mapped to multiple output channels. Some decoded channels might not be assigned to any output channel, as well. For channel mapping family 0, the first index defaults to 0, and if \( C == 2 \), the second index defaults to 1. Neither index is coded. After producing the output channels, the channel mapping family determines the semantic meaning of each one. There are three defined mapping families in this specification. 5.1.1.1. Channel Mapping Family 0 Allowed numbers of channels: 1 or 2. RTP mapping. This is the same channel interpretation as [RFC7587]. - 1 channel: monophonic (mono). - 2 channels: stereo (left, right). 
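The index-decoding rules above can be sketched as a small helper (the function name is illustrative):

```python
def decode_channel(index: int, coupled_count: int):
    """Resolve one 'channel mapping' octet to its source, per the rules
    above: indices below 2*M address a coupled (stereo) stream's left or
    right channel, indices from 2*M up to 254 address a mono stream, and
    255 means the output channel is pure silence (returned as None)."""
    M = coupled_count
    if index == 255:
        return None                    # pure silence
    if index < 2 * M:
        side = "left" if index % 2 == 0 else "right"
        return (index // 2, side)      # coupled stream, one side
    return (index - M, "mono")         # mono stream
```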
Special mapping: This channel mapping family also indicates that the content consists of a single Opus stream that is stereo if and only if \( C == 2 \), with stream index 0 mapped to output channel 0 (mono, or left channel) and stream index 1 mapped to output channel 1 (right channel) if stereo. When the 'channel mapping family' octet has this value, the channel mapping table MUST be omitted from the ID header packet.

5.1.1.2. Channel Mapping Family 1

Allowed numbers of channels: 1...8. Vorbis channel order (see below).

Each channel is assigned to a speaker location in a conventional surround arrangement. Specific locations depend on the number of channels, and are given below in order of the corresponding channel indices.

- 1 channel: monophonic (mono).
- 2 channels: stereo (left, right).
- 3 channels: linear surround (left, center, right).
- 4 channels: quadraphonic (front left, front right, rear left, rear right).
- 5 channels: 5.0 surround (front left, front center, front right, rear left, rear right).
- 6 channels: 5.1 surround (front left, front center, front right, rear left, rear right, LFE).
- 7 channels: 6.1 surround (front left, front center, front right, side left, side right, rear center, LFE).
- 8 channels: 7.1 surround (front left, front center, front right, side left, side right, rear left, rear right, LFE).

This set of surround options and speaker location orderings is the same as those used by the Vorbis codec [VORBIS-MAPPING]. The ordering is different from the one used by the WAVE [WAVE-MULTICHANNEL] and Free Lossless Audio Codec (FLAC) [FLAC] formats, so correct ordering requires permutation of the output channels when decoding to or encoding from those formats. "LFE" here refers to a Low Frequency Effects channel, often mapped to a subwoofer with no particular spatial position.
Implementations SHOULD identify "side" or "rear" speaker locations with "surround" and "back" as appropriate when interfacing with audio formats or systems that prefer that terminology. 5.1.1.3. Channel Mapping Family 255 Allowed numbers of channels: 1...255. No defined channel meaning. Channels are unidentified. General-purpose players SHOULD NOT attempt to play these streams. Offline implementations MAY deinterleave the output into separate PCM files, one per channel. Implementations SHOULD NOT produce output for channels mapped to stream index 255 (pure silence) unless they have no other way to indicate the index of non-silent channels. 5.1.1.4. Undefined Channel Mappings The remaining channel mapping families (2...254) are reserved. A demuxer implementation encountering a reserved 'channel mapping family' value SHOULD act as though the value is 255. 5.1.1.5. Downmixing An Ogg Opus player MUST support any valid channel mapping with a channel mapping family of 0 or 1, even if the number of channels does not match the physically connected audio hardware. Players SHOULD perform channel mixing to increase or reduce the number of channels as needed. Implementations MAY use the matrices in Figures 4 through 9 to implement downmixing from multichannel files using channel mapping family 1 (Section 5.1.1.2), which are known to give acceptable results for stereo. Matrices for 3 and 4 channels are normalized so each coefficient row sums to 1 to avoid clipping. For 5 or more channels, they are normalized to 2 as a compromise between clipping and dynamic range reduction. In these matrices the front-left and front-right channels are generally passed through directly. When a surround channel is split between both the left and right stereo channels, coefficients are chosen so their squares sum to 1, which helps preserve the perceived intensity. Rear channels are mixed more diffusely or attenuated to maintain focus on the front channels. 
\[
\begin{align*}
L \text{ output} &= 0.585786 \times \text{left} + 0.414214 \times \text{center} \\
R \text{ output} &= 0.414214 \times \text{center} + 0.585786 \times \text{right}
\end{align*}
\]

Exact coefficient values are 1 and 1/sqrt(2), multiplied by 1/(1 + 1/sqrt(2)) for normalization.

Figure 4: Stereo Downmix Matrix for the Linear Surround Channel Mapping

\[
\begin{bmatrix} L \text{ output} \\ R \text{ output} \end{bmatrix} =
\begin{bmatrix}
0.422650 & 0.000000 & 0.366025 & 0.211325 \\
0.000000 & 0.422650 & 0.211325 & 0.366025
\end{bmatrix}
\]

Exact coefficient values are 1, sqrt(3)/2 and 1/2, multiplied by 1/(1 + sqrt(3)/2 + 1/2) for normalization.

Figure 5: Stereo Downmix Matrix for the Quadraphonic Channel Mapping

\[
\begin{bmatrix} L \\ R \end{bmatrix} =
\begin{bmatrix}
0.650802 & 0.460186 & 0.000000 & 0.563611 & 0.325401 \\
0.000000 & 0.460186 & 0.650802 & 0.325401 & 0.563611
\end{bmatrix}
\]

Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2 and 1/2, multiplied by 2/(1 + 1/sqrt(2) + sqrt(3)/2 + 1/2) for normalization.

Figure 6: Stereo Downmix Matrix for the 5.0 Surround Mapping

\[
\begin{bmatrix}
0.529067 & 0.374107 & 0.000000 & 0.458186 & 0.264534 & 0.374107 \\
0.000000 & 0.374107 & 0.529067 & 0.264534 & 0.458186 & 0.374107
\end{bmatrix}
\]

Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2 and 1/2, multiplied by \(2/(1 + 1/sqrt(2) + sqrt(3)/2 + 1/2 + 1/sqrt(2))\) for normalization.

**Figure 7:** Stereo Downmix Matrix for the 5.1 Surround Mapping

\[
\begin{bmatrix}
0.455310 & 0.321953 & 0.000000 & 0.394310 & 0.227655 & 0.278819 & 0.321953 \\
0.000000 & 0.321953 & 0.455310 & 0.227655 & 0.394310 & 0.278819 & 0.321953
\end{bmatrix}
\]

Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2, 1/2 and sqrt(3)/2/sqrt(2), multiplied by \(2/(1 + 1/sqrt(2) + sqrt(3)/2 + 1/2 + sqrt(3)/2/sqrt(2) + 1/sqrt(2))\) for normalization. The coefficients are in the same order as in Section 5.1.1.2 and the matrices above.
**Figure 8:** Stereo Downmix Matrix for the 6.1 Surround Mapping \[ \begin{bmatrix} 0.388631 & .274804 & .000000 & .336565 & .194316 & .336565 & .194316 & .274804 \\ 0.000000 & .274804 & .388631 & .194316 & .336565 & .194316 & .336565 & .274804 \\ \end{bmatrix} \] Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2 and 1/2, multiplied by \(2/(2 + 2/sqrt(2) + sqrt(3))\) for normalization. The coefficients are in the same order as in Section 5.1.1.2 and the matrices above. **Figure 9:** Stereo Downmix Matrix for the 7.1 Surround Mapping 5.2. Comment Header The comment header consists of a 64-bit ‘magic signature’ field, followed by data in the same format as the [VORBIS-COMMENT] header used in Ogg Vorbis, except (like Ogg Theora and Speex) the final ‘framing bit’ specified in the Vorbis specification is not present. 1. Magic Signature: This is an 8-octet (64-bit) field that allows codec identification and is human readable. It contains, in order, the magic numbers: - 0x4F 'O' - 0x70 'p' - 0x75 'u' - 0x73 's' - 0x54 'T' - 0x61 'a' - 0x67 'g' - 0x73 's' Starting with "Op" helps distinguish it from audio data packets, as this is an invalid TOC sequence. 2. Vendor String Length (32 bits, unsigned, little endian): This field gives the length of the following vendor string, in octets. It MUST NOT indicate that the vendor string is longer than the rest of the packet. 3. Vendor String (variable length, UTF-8 vector): This is a simple human-readable tag for vendor information, encoded as a UTF-8 string [RFC3629]. No terminating null octet is necessary. This tag is intended to identify the codec encoder and encapsulation implementations, for tracing differences in technical behavior. User-facing applications can use the 'ENCODER' user comment tag to identify themselves. 4. User Comment List Length (32 bits, unsigned, little endian): This field indicates the number of user-supplied comments. 
It MAY indicate there are zero user-supplied comments, in which case there are no additional fields in the packet. It MUST NOT indicate that there are so many comments that the comment string lengths would require more data than is available in the rest of the packet. 5. User Comment #i String Length (32 bits, unsigned, little endian): This field gives the length of the following user comment string, in octets. There is one for each user comment indicated by the ‘user comment list length’ field. It MUST NOT indicate that the string is longer than the rest of the packet. 6. User Comment #i String (variable length, UTF-8 vector): This field contains a single user comment encoded as a UTF-8 string [RFC3629]. There is one for each user comment indicated by the ‘user comment list length’ field. The ‘vendor string length’ and ‘user comment list length’ fields are REQUIRED, and implementations SHOULD treat a stream as invalid if it contains a comment header that does not have enough data for these fields, or that does not contain enough data for the corresponding vendor string or user comments they describe. Making this check before allocating the associated memory to contain the data helps prevent a possible Denial-of-Service (DoS) attack from small comment headers that claim to contain strings longer than the entire packet or more user comments than could possibly fit in the packet. Immediately following the user comment list, the comment header MAY contain zero-padding or other binary data that is not specified here. If the least-significant bit of the first byte of this data is 1, then editors SHOULD preserve the contents of this data when updating the tags, but if this bit is 0, all such data MAY be treated as padding, and truncated or discarded as desired. This allows informal experimentation with the format of this binary data until it can be specified later. The comment header can be arbitrarily large and might be spread over a large number of Ogg pages. 
Implementations MUST avoid attempting to allocate excessive amounts of memory when presented with a very large comment header. To accomplish this, implementations MAY treat a stream as invalid if it has a comment header larger than 125,829,120 octets (120 MB), and MAY ignore individual comments that are not fully contained within the first 61,440 octets of the comment header. 5.2.1. Tag Definitions The user comment strings follow the NAME=value format described by [VORBIS-COMMENT] with the same recommended tag names: ARTIST, TITLE, DATE, ALBUM, and so on. Two new comment tags are introduced here: First, an optional gain for track normalization: R128_TRACK_GAIN=-573 representing the volume shift needed to normalize the track’s volume during isolated playback, in random shuffle, and so on. The gain is a Q7.8 fixed-point number in dB, as in the ID header’s ‘output gain’ field. This tag is similar to the REPLAYGAIN_TRACK_GAIN tag in Vorbis [REPLAY-GAIN], except that the normal volume reference is the [EBU-R128] standard. Second, an optional gain for album normalization: R128_ALBUM_GAIN=111 representing the volume shift needed to normalize the overall volume when played as part of a particular collection of tracks. The gain is also a Q7.8 fixed-point number in dB, as in the ID header’s ‘output gain’ field. The values ‘-573’ and ‘111’ given here are just examples. An Ogg Opus stream MUST NOT have more than one of each of these tags, and, if present, their values MUST be an integer from -32768 to 32767, inclusive, represented in ASCII as a base 10 number with no whitespace. A leading ‘+’ or ‘-’ character is valid. Leading zeros are also permitted, but the value MUST be represented by no more than 6 characters. Other non-digit characters MUST NOT be present. If present, R128_TRACK_GAIN and R128_ALBUM_GAIN MUST correctly represent the R128 normalization gain relative to the ‘output gain’ field specified in the ID header. 
If a player chooses to make use of the R128_TRACK_GAIN tag or the R128_ALBUM_GAIN tag, it MUST apply those gains _in addition_ to the ‘output gain’ value. If a tool modifies the ID header’s ‘output gain’ field, it MUST also update or remove the R128_TRACK_GAIN and R128_ALBUM_GAIN comment tags if present. A muxer SHOULD place the gain it wants other tools to use by default into the ‘output gain’ field, and not the comment tag. To avoid confusion with multiple normalization schemes, an Opus comment header SHOULD NOT contain any of the REPLAYGAIN_TRACK_GAIN, REPLAYGAIN_TRACK_PEAK, REPLAYGAIN_ALBUM_GAIN, or REPLAYGAIN_ALBUM_PEAK tags, unless they are only to be used in some context where there is guaranteed to be no such confusion. [EBU-R128] normalization is preferred to the earlier REPLAYGAIN schemes because of its clear definition and adoption by industry. Peak normalizations are difficult to calculate reliably for lossy codecs because of variation in excursion heights due to decoder differences. In the authors’ investigations, they were not applied consistently or broadly enough to merit inclusion here. 6. Packet Size Limits Technically, valid Opus packets can be arbitrarily large due to the padding format, although the amount of non-padding data they can contain is bounded. These packets might be spread over a similarly enormous number of Ogg pages. When encoding, implementations SHOULD limit the use of padding in audio data packets to no more than is necessary to make a VBR stream CBR, unless they have no reasonable way to determine what is necessary. Demuxers SHOULD treat audio data packets as invalid (treat them as if they were malformed Opus packets with an invalid TOC sequence) if they are larger than 61,440 octets per Opus stream, unless they have a specific reason for allowing extra padding. Such packets necessarily contain more padding than needed to make a stream CBR. 
Demuxers MUST avoid attempting to allocate excessive amounts of memory when presented with a very large packet. Demuxers MAY treat audio data packets as invalid or partially process them if they are larger than 61,440 octets in an Ogg Opus stream with channel mapping families 0 or 1. Demuxers MAY treat audio data packets as invalid or partially process them in any Ogg Opus stream if the packet is larger than 61,440 octets and also larger than 7,680 octets per Opus stream. The presence of an extremely large packet in the stream could indicate a memory exhaustion attack or stream corruption.

In an Ogg Opus stream, the largest possible valid packet that does not use padding has a size of (61,298*N - 2) octets. With 255 streams, this is 15,630,988 octets and can span up to 61,298 Ogg pages, all but one of which will have a granule position of -1. This is, of course, a very extreme packet, consisting of 255 streams, each containing 120 ms of audio encoded as 2.5 ms frames, each frame using the maximum possible number of octets (1275) and stored in the least efficient manner allowed (a VBR code 3 Opus packet). Even in such a packet, most of the data will be zeros as 2.5 ms frames cannot actually use all 1275 octets.

The largest packet consisting of entirely useful data is (15,326*N - 2) octets. This corresponds to 120 ms of audio encoded as 10 ms frames in either SILK or Hybrid mode, but at a data rate of over 1 Mbps, which makes little sense for the quality achieved.

A more reasonable limit is (7,664*N - 2) octets. This corresponds to 120 ms of audio encoded as 20 ms stereo CELT mode frames, with a total bitrate just under 511 kbps (not counting the Ogg encapsulation overhead). For channel mapping family 1, N = 8 provides a reasonable upper bound, as it allows for each of the 8 possible output channels to be decoded from a separate stereo Opus stream.
This gives a size of 61,310 octets, which is rounded up to a multiple of 1,024 octets to yield the audio data packet size of 61,440 octets that any implementation is expected to be able to process successfully. 7. Encoder Guidelines When encoding Opus streams, Ogg muxers SHOULD take into account the algorithmic delay of the Opus encoder. In encoders derived from the reference implementation [RFC6716], the number of samples can be queried with ```c opus_encoder_ctl(encoder_state, OPUS_GET_LOOKAHEAD(&delay_samples)); ``` To achieve good quality in the very first samples of a stream, implementations MAY use linear predictive coding (LPC) extrapolation to generate at least 120 extra samples at the beginning to avoid the Opus encoder having to encode a discontinuous signal. For more information on linear prediction, see [LINEAR-PREDICTION]. For an input file containing ‘length’ samples, the implementation SHOULD set the ‘pre-skip’ header value to (delay_samples + extra_samples), encode at least (length + delay_samples + extra_samples) samples, and set the granule position of the last page to (length + delay_samples + extra_samples). This ensures that the encoded file has the same duration as the original, with no time offset. The best way to pad the end of the stream is to also use LPC extrapolation, but zero-padding is also acceptable. 7.1. LPC Extrapolation The first step in LPC extrapolation is to compute linear prediction coefficients [LPC-SAMPLE]. When extending the end of the signal, order-N (typically with N ranging from 8 to 40) LPC analysis is performed on a window near the end of the signal. The last N samples are used as memory to an infinite impulse response (IIR) filter. The filter is then applied on a zero input to extrapolate the end of the signal. Let ‘a(k)’ be the kth LPC coefficient and ‘x(n)’ be the nth sample of the signal. 
Each new sample past the end of the signal is computed as

\[ x(n) = \sum_{k=1}^{N} a(k) x(n-k) \]

The process is repeated independently for each channel. It is possible to extend the beginning of the signal by applying the same process backward in time. When extending the beginning of the signal, it is best to apply a "fade in" to the extrapolated signal, e.g., by multiplying it by a half-Hanning window [HANNING].

7.2. Continuous Chaining

In some applications, such as Internet radio, it is desirable to cut a long stream into smaller chains, e.g., so the comment header can be updated. This can be done simply by separating the input streams into segments and encoding each segment independently. The drawback of this approach is that it creates a small discontinuity at the boundary due to the lossy nature of Opus. A muxer MAY avoid this discontinuity by using the following procedure:

1. Encode the last frame of the first segment as an independent frame by turning off all forms of inter-frame prediction. De-emphasis is allowed.
2. Set the granule position of the last page to a point near the end of the last frame.
3. Begin the second segment with a copy of the last frame of the first segment.
4. Set the ‘pre-skip’ value of the second stream in such a way as to properly join the two streams.
5. Continue the encoding process normally from there, without any reset to the encoder.

In encoders derived from the reference implementation, inter-frame prediction can be turned off by calling

```c
opus_encoder_ctl(encoder_state, OPUS_SET_PREDICTION_DISABLED(1));
```

For best results, this implementation requires that prediction be explicitly enabled again before resuming normal encoding, even after a reset.

8. Security Considerations

Implementations of the Opus codec need to take appropriate security considerations into account, as outlined in [RFC4732]. This is just as much a problem for the container as it is for the codec itself.
Malicious payloads and/or input streams can be used to attack codec implementations. Implementations MUST NOT overrun their allocated memory nor consume excessive resources when decoding payloads or processing input streams. Although problems in encoding applications are typically rarer, this still applies to a muxer, as vulnerabilities would allow an attacker to attack transcoding gateways. Header parsing code contains the most likely area for potential overruns. It is important for implementations to ensure their buffers contain enough data for all of the required fields before attempting to read it (for example, for all of the channel map data in the ID header). Implementations would do well to validate the indices of the channel map, also, to ensure they meet all of the restrictions outlined in Section 5.1.1, in order to avoid attempting to read data from channels that do not exist. To avoid excessive resource usage, we advise implementations to be especially wary of streams that might cause them to process far more data than was actually transmitted. For example, a relatively small comment header may contain values for the string lengths or user comment list length that imply that it is many gigabytes in size. Even computing the size of the required buffer could overflow a 32-bit integer, and actually attempting to allocate such a buffer before verifying it would be a reasonable size is a bad idea. After reading the user comment list length, implementations might wish to verify that the header contains at least the minimum amount of data for that many comments (4 additional octets per comment, to indicate each has a length of zero) before proceeding any further, again taking care to avoid overflow in these calculations. If allocating an array of pointers to point at these strings, the size of the pointers may be larger than 4 octets, potentially requiring a separate overflow check. 
Another bug in this class we have observed more than once involves the handling of invalid data at the end of a stream. Often, implementations will seek to the end of a stream to locate the last timestamp in order to compute its total duration. If they do not find a valid capture pattern and Ogg page from the desired logical stream, they will back up and try again. If care is not taken to avoid re-scanning data that was already scanned, this search can quickly devolve into something with a complexity that is quadratic in the amount of invalid data. In general, when seeking, implementations will wish to be cautious about the effects of invalid granule position values and ensure all algorithms will continue to make progress and eventually terminate, even if these are missing or out of order. Like most other container formats, Ogg Opus streams SHOULD NOT be used with insecure ciphers or cipher modes that are vulnerable to known-plaintext attacks. Elements such as the Ogg page capture pattern and the ‘magic signature’ fields in the ID header and the comment header all have easily predictable values, in addition to various elements of the codec data itself. 9. Content Type An "Ogg Opus file" consists of one or more sequentially multiplexed segments, each containing exactly one Ogg Opus stream. The RECOMMENDED mime-type for Ogg Opus files is "audio/ogg". If more specificity is desired, one MAY indicate the presence of Opus streams using the codecs parameter defined in [RFC6381] and [RFC5334], e.g., ```plaintext audio/ogg; codecs=opus ``` for an Ogg Opus file. The RECOMMENDED filename extension for Ogg Opus files is ".opus". When Opus is concurrently multiplexed with other streams in an Ogg container, one SHOULD use one of the "audio/ogg", "video/ogg", or "application/ogg" mime-types, as defined in [RFC5334]. 
Such streams are not strictly "Ogg Opus files" as described above, since they contain more than a single Opus stream per sequentially multiplexed segment, e.g., video or multiple audio tracks. In such cases, the '.opus' filename extension is NOT RECOMMENDED. In either case, this document updates [RFC5334] to add "opus" as a codecs parameter value with char[8]: ’OpusHead’ as Codec Identifier. 10. IANA Considerations Per this document, IANA has updated the "Media Types" registry by adding .opus as a file extension for "audio/ogg" and adding itself as a reference alongside [RFC5334] for "audio/ogg", "video/ogg", and "application/ogg" Media Types. This document defines a new registry "Opus Channel Mapping Families" to indicate how the semantic meanings of the channels in a multi- channel Opus stream are described. IANA has created a new namespace of "Opus Channel Mapping Families". This registry is listed on the IANA Matrix. Modifications to this registry follow the "Specification Required" registration policy as defined in [RFC5226]. Each registry entry consists of a Channel Mapping Family Number, which is specified in decimal in the range 0 to 255, inclusive, and a Reference (or list of references). Each Reference must point to sufficient documentation to describe what information is coded in the Opus identification header for this channel mapping family, how a demuxer determines the stream count ('N') and coupled stream count ('M') from this information, and how it determines the proper interpretation of each of the decoded channels. This document defines three initial assignments for this registry. 
<table>
  <thead>
    <tr><th>Value</th><th>Reference</th></tr>
  </thead>
  <tbody>
    <tr><td>0</td><td>RFC 7845, Section 5.1.1.1</td></tr>
    <tr><td>1</td><td>RFC 7845, Section 5.1.1.2</td></tr>
    <tr><td>255</td><td>RFC 7845, Section 5.1.1.3</td></tr>
  </tbody>
</table>

The designated expert will determine if the Reference points to a specification that meets the requirements for permanence and ready availability laid out in [RFC5226] and whether it specifies the information described above with sufficient clarity to allow interoperable implementations.

11. References

11.1. Normative References

Terriberry, et al. Standards Track [Page 32]

11.2. Informative References

[RFC4732] Handley, M., Ed., Rescorla, E., Ed., and IAB, "Internet Denial-of-Service Considerations", RFC 4732, DOI 10.17487/RFC4732, December 2006.

[RFC7587] Spittka, J., Vos, K., and JM. Valin, "RTP Payload Format for the Opus Speech and Audio Codec", RFC 7587, DOI 10.17487/RFC7587, June 2015.

[FLAC] Coalson, J., "FLAC – Free Lossless Audio Codec Format Description", January 2008, https://xiph.org/flac/format.html.

[HANNING] Wikipedia, "Hann window", February 2016.

[LINEAR-PREDICTION] Wikipedia, "Linear Predictive Coding", October 2015.

[LPC-SAMPLE] Degener, J. and C. Bormann, "Autocorrelation LPC coefficient generation algorithm (Vorbis source code)", November 1994.

[Q-NOTATION] Wikipedia, "Q (number format)", December 2015.

[REPLAY-GAIN] Parker, C. and M. Leese, "VorbisComment: Replay Gain", June 2009, https://wiki.xiph.org/VorbisComment#Replay_Gain.

[SEEKING] Pfeiffer, S., Parker, C., and G. Maxwell, "Granulepos Encoding and How Seeking Really Works", May 2012.

Acknowledgments

Thanks to Ben Campbell, Joel M. Halpern, Mark Harris, Greg Maxwell, Christopher "Monty" Montgomery, Jean-Marc Valin, Stephan Wenger, and Mo Zanaty for their valuable contributions to this document. Additional thanks to Andrew D'Addesio, Greg Maxwell, and Vincent Penquerc'h for their feedback based on early implementations.

Authors' Addresses

Timothy B. Terriberry
Mozilla Corporation
331 E. Evelyn Ave.
Mountain View, CA 94041 United States Phone: +1 650 903-0800 Email: tterribe@xiph.org Ron Lee Voicetronix 246 Pulteney Street, Level 1 Adelaide, SA 5000 Australia Phone: +61 8 8232 9112 Email: ron@debian.org Ralph Giles Mozilla Corporation 163 West Hastings Street Vancouver, BC V6B 1H5 Canada Phone: +1 778 785 1540 Email: giles@xiph.org
A Study and Toolkit for Asynchronous Programming in C# Semih Okur, David L. Hartveld, Danny Dig and Arie van Deursen Report TUD-SERG-2013-016 A Study and Toolkit for Asynchronous Programming in C# Semih Okur1, David L. Hartveld2, Danny Dig3, Arie van Deursen2 1University of Illinois 2Delft University of Technology 3Oregon State University okur2@illinois.edu d.l.hartveld@student.tudelft.nl digd@eecs.oregonstate.edu arie.vandeursen@tudelft.nl ABSTRACT Asynchronous programming is in demand today, because responsiveness is increasingly important on all modern devices. Yet, we know little about how developers use asynchronous programming in practice. Without such knowledge, developers, researchers, language and library designers, and tool vendors can make wrong assumptions. We present the first study that analyzes the usage of asynchronous programming in a large experiment. We analyzed 1378 open source Windows Phone (WP) apps, comprising 12M SLOC, produced by 3376 developers. Using this data, we answer 2 research questions about use and misuse of asynchronous constructs. Inspired by these findings, we developed (i) Asyncifier, an automated refactoring tool that converts callback-based asynchronous code to the new async/await; (ii) Corrector, a tool that finds and corrects common misuses of async/await. Our empirical evaluation shows that these tools are (i) applicable and (ii) efficient. Developers accepted 313 patches generated by our tools. 1. INTRODUCTION User interfaces are usually designed around the use of a single user interface (UI) event thread [16, 17, 24, 25]: every operation that modifies UI state is executed as an event on that thread. The UI “freezes” when it cannot respond to input, or when it cannot be redrawn. It is recommended that long-running CPU-bound or blocking I/O operations execute asynchronously so that the application (app) continues to respond to UI events. 
Asynchronous programming is in demand today because responsiveness is increasingly important on all modern devices: desktop, mobile, or web apps. Therefore, major programming languages have APIs that support non-blocking, asynchronous operations (e.g., to access the web, or for file operations). While these APIs make asynchronous programming possible, they do not make it easy. Asynchronous APIs rely on callbacks. However, callbacks invert the control flow, are awkward, and obfuscate the intent of the original synchronous code [38]. Recently, major languages (F# [38], C# and Visual Basic [8], and Scala [7]) introduced async constructs that resemble the straightforward coding style of traditional synchronous code. Thus, they recognize asynchronous programming as a first-class citizen.

Yet, we know little about how developers use asynchronous programming, and specifically the new async constructs, in practice. Without such knowledge, other developers cannot educate themselves about the state of the practice, language and library designers are unaware of any misuse, researchers make wrong assumptions, and tool vendors do not provide the tools that developers really need. This knowledge is also important as a guide to designers of other major languages (e.g., Java) planning to support similar constructs. Hence, asynchronous programming deserves first-class citizenship in empirical research and tool support, too.

We present the first study that analyzes the usage of asynchronous libraries and the new async/await language constructs in a large experiment. We analyzed 1378 open source Windows Phone (WP) apps, comprising 12M SLOC, produced by 3376 developers. While all our empirical analysis and tools directly apply to any platform app written in C# (e.g., desktop, console, web, tablet), in this paper we focus on the Windows Phone platform. We focus on WP apps because we expect to find many exemplars of asynchronous programming, given that responsiveness is critical.
Mobile apps can easily be unresponsive, because mobile devices have limited resources and high latency (excessive network accesses). With the immediacy of touch-based UIs, even small hiccups in responsiveness are more obvious and jarring than when using a mouse or keyboard. Some sluggishness might motivate the user to uninstall the app, and possibly submit negative comments in the app store [37]. Moreover, mobile apps are becoming increasingly more important. According to Gartner, by 2016 more than 300 billion apps will be downloaded annually [18].

The goal of this paper is twofold. First, we obtain a deep understanding of the problems around asynchronous programming. Second, we present a toolkit (2 tools) to address exactly these problems. To this end, we investigate 1378 WP apps through tools and by hand, focusing on the following research questions:

RQ1: How do developers use asynchronous programming?

RQ2: To what extent do developers misuse async/await?

We found that developers heavily use callback-based asynchronous idioms. However, Microsoft no longer officially recommends these asynchronous idioms [30] and has started to replace them with new idioms in new libraries (e.g., WinRT). Developers need to refactor callback-based idioms to new idioms that can take advantage of the async/await language constructs. The changes that the refactoring requires are non-trivial, though. For instance, developers have to inspect deep call graphs. Furthermore, they need to be extra careful to preserve the exception-handling behavior. Thus, we implemented the refactoring as an automated tool, Asyncifier.

We also found that nearly half of all WP8 apps have started to use the 9-month-old async/await keywords. However, developers misuse async/await in various ways. We define misuse as anti-patterns which hurt performance and might cause serious problems like deadlocks.
For instance, we found that 14% of methods that use (the expensive) async/await do this unnecessarily, 19% of methods do not follow an important good practice [22], 1 out of 5 apps miss opportunities in async methods to increase asynchronicity, and developers (almost) always unnecessarily capture context, hurting performance. Thus, we implemented a transformation tool, Corrector, that finds and corrects the misuse of async/await.

This paper makes the following contributions:

**Empirical Study:** To the best of our knowledge, this is the first large-scale empirical study to answer questions about asynchronous programming and the new language constructs, async/await, that will be available soon in other major programming languages. We present implications of our findings from the perspective of four main audiences: developers, language and library designers, researchers, and tool vendors.

**Toolkit:** We implemented the analysis and transformation algorithms to address the challenges (Asyncifier and Corrector).

**Evaluation:** We evaluated our tools by using the code corpus and applied the tools hundreds of times. We show that our tools are highly applicable and efficient. Developers find our transformations useful. Using Asyncifier, we applied and reported refactorings in 10 apps. 9 replied and accepted each one of our 28 refactorings. Using Corrector, we found and reported misuses in 19 apps. 18 replied and accepted each of our 285 patches.

**Outreach:** Because developers learn new language constructs through both positive and negative examples, we designed a website, http://learnAsync.NET/, to show hundreds of such usages of asynchronous idioms and async/await keywords.

## 2. BACKGROUND

When a button click event handler executes a synchronous long-running CPU-bound or blocking I/O operation, the user interface will freeze because the UI event thread cannot respond to events. Code listing 1 shows an example of such an event handler, method Button_Click.
It uses the GetFromUrl method to download the contents of a URL, and place it in a text box. Because GetFromUrl is waiting for the network operation to complete, the UI event thread is blocked, and the UI is unresponsive.

Keeping UIs responsive thus means keeping the UI event thread free of those long-running or blocking operations. If these operations are executed asynchronously in the background, the foreground UI event thread does not have to busy-wait for completion of the operations. That frees the UI event thread to respond to user input, or redraw the UI: the user will experience the UI as responsive.

CPU-bound operations can be executed asynchronously by (i) explicitly creating threads, or (ii) reusing a thread from the thread pool. I/O operations are more complicated to offload asynchronously. The naive approach would be to just start another thread to run the synchronous operation asynchronously, using the same mechanics as used for CPU-bound code. However, that would still block the new thread, which consumes significant resources, hurting scalability. The solution is to use the asynchronous APIs provided by the platform. The .NET framework mainly provides two models for asynchronous programming: (1) the Asynchronous Programming Model (APM), which uses callbacks, and (2) the Task-based Asynchronous Pattern (TAP), which uses tasks, similar to the concept of futures found in many other languages such as Java, Scala or Python.

### 2.1 Asynchronous Programming Model

APM, the Asynchronous Programming Model, was part of the first version of the .NET framework, and has been in existence for 10 years. APM asynchronous operations are started with a Begin method invocation. The result is obtained with an End method invocation. In Code listing 2, BeginGetResponse is such a Begin method, and EndGetResponse is an End method. BeginGetResponse is used to initiate an asynchronous HTTP GET request.
The .NET framework starts the I/O operation in the background (in this case, sending the request to the remote web server). Control is returned to the calling method, which can then continue to do something else. When the server responds, the .NET framework will "call back" to the application to notify that the response is ready. EndGetResponse is then used in the callback code to retrieve the actual result of the operation. See Figure 1 for an illustration of this flow of events.

The APM Begin method has two pattern-related parameters. The first parameter is the callback delegate (which is a managed, type-safe equivalent of a function pointer). It can be defined as either a method reference, or a lambda expression. The second parameter allows the developer to pass any single object reference to the callback, and is called state. The .NET framework will execute the callback delegate on the thread pool once the asynchronous background operation completes. The EndGetResponse method is then used in the callback to obtain the result of the operation, the actual WebResponse.

Note a subtle difference between the synchronous, sequential example in Code listing 1 and the asynchronous, APM-based example in Code listing 2. In the synchronous example, the Button_Click method contains the UI update (setting the download result as contents of the text box). However, in the asynchronous example, the final callback contains an invocation of Dispatcher.BeginInvoke(...) to change context from the thread pool to the UI event thread.

**Code 1 Synchronous example**
```csharp
void Button_Click(...) {
    string contents = GetFromUrl(url);
    textBox.Text = contents;
}
string GetFromUrl(string url) {
    WebRequest request = WebRequest.Create(url);
    WebResponse response = request.GetResponse();
    Stream stream = response.GetResponseStream();
    return stream.ReadToEnd();
}
```

**Code 2 APM-based example**
```csharp
 1 void Button_Click(...) {
 2   GetFromUrl(url);
 3 }
 4 void GetFromUrl(string url) {
 5   WebRequest request = WebRequest.Create(url);
 6   request.BeginGetResponse(Callback, request);
 7 }
 8 void Callback(IAsyncResult asyncResult) {
 9   var request = (WebRequest)asyncResult.AsyncState;
10   var response = request.EndGetResponse(asyncResult);
11   var stream = response.GetResponseStream();
12   var content = stream.ReadAsString();
13   Dispatcher.BeginInvoke(() => {
14     textBox.Text = content;
15   });
16 }
```

**Code 3 TAP & async/await-based example**
```csharp
async void Button_Click(...) {
    string contents = await GetFromUrlAsync(url);
    textBox.Text = contents;
}
async Task<string> GetFromUrlAsync(string url) {
    var request = WebRequest.Create(url);
    var response = await request.GetResponseAsync()
                                .ConfigureAwait(false);
    var stream = response.GetResponseStream();
    return stream.ReadAsString();
}
```

Figure 1: Where is callback-based APM code executing?

### 2.2 Task-based Asynchronous Pattern

TAP, the Task-based Asynchronous Pattern, provides for a slightly different approach. TAP methods have the same base operation name as APM methods, without 'Begin' or 'End' prefixes, and instead have an 'Async' suffix. The API consists of methods that start the background operation and return a Task object. The Task represents the operation in progress, and its future result. The Task can be (1) queried for the status of the operation, (2) synchronized upon to wait for the result of the operation, or (3) set up with a continuation that resumes in the background when the task completes (similar to the callbacks in the APM model).

### 2.3 Drawbacks of APM and plain TAP

Using APM and plain TAP directly has two main drawbacks. First, the code that must be executed after the asynchronous operation is finished must be passed explicitly to the Begin method invocation. For APM, even more scaffolding is required: the End method must be called, and that usually requires the explicit passing and casting of an 'async state' object instance - see Code listing 2, lines 9-10.
Second, even though the Begin method might be called from the UI event thread, the callback code is executed on a thread pool thread. To update the UI after completion of the asynchronous operation from the thread pool thread, an event must be sent to the UI event thread explicitly - see Code listing 2, lines 13-15.

### 2.4 Pause'n'play with async & await

To solve these problems, the async and await keywords have been introduced. When a method has the async keyword modifier in its signature, the await keyword can be used to define pausing points. When a Task is awaited in an await expression, the current method is paused and control is returned to the caller. When the awaited Task's background operation is completed, the method is resumed from right after the await expression. Code listing 3 shows the TAP- & async/await-based equivalent of Code listing 2, and Figure 2 illustrates its flow of execution.

The code following the await expression can be considered a continuation of the method, exactly like the callback that needs to be supplied explicitly when using APM or plain TAP. Methods that have the async modifier will thus run synchronously up to the first await expression (and if they do not have any, they will complete synchronously). Merely adding the async modifier does not magically make a method execute asynchronously in the background.

Figure 2: Where is the async/await code executing?

### 2.5 Where is the code executing?

There is one important difference between async/await continuations, and APM or plain TAP callback continuations: APM and plain TAP always execute the callback on a thread pool thread. The programmer needs to explicitly schedule a UI event to interface with the UI, as shown in Code listing 2 and Figure 1. In async/await continuations, the await keyword, by default, captures information about the thread in which it is executed.
This captured context is used to schedule execution of the rest of the method in the same context as when the asynchronous operation was called. For example, if the **await** keyword is encountered in the UI event thread, it will capture that fact. Once the background operation is completed, the continuation of the rest of the method is scheduled back onto the UI event thread. This behavior allows the developer to write asynchronous code in a sequential manner. See Code listing 3 for an example. Comparing the code examples in listings 1 and 3 will show that the responsive version based on TAP & async/await only slightly differs from the sequential version. It is readable in a similar fashion, and even the UI update (setting the contents of the text box) is back at its original place.

By default, **await** expressions capture the current context. However, it is not always needed to make the expensive context switch back to the original context. To forestall a context switch, an **await**ed **Task** can be set to ignore capturing the current context by using `ConfigureAwait(false)`. In Code listing 3, in `GetFromUrlAsync`, none of the statements following the **await** expressions require access to the UI. Hence, the **await**ed **Task** is set with `ConfigureAwait(false)`. In `Button_Click`, the statement following **await** `GetFromUrlAsync(url)` does need to update the UI. So that **await** expression should capture the original context, and the task should not be set up with `ConfigureAwait(false)`.

## 3. RESEARCH QUESTIONS

We are interested in assessing the usage of state-of-the-art asynchronous programming in real-world WP apps.

### 3.1 Methodology

**Corpus of Data**: We chose Microsoft's CodePlex [11] and GitHub [19] as sources of the code corpus of WP apps. According to a recent study [27], most C# apps reside in these two repositories. We developed WPCollector to create our code corpus.
It is available online [10] for reuse by other researchers. We used WPCollector to download all recently updated WP apps which have a WP-related signature in their project files. It ignores (1) apps without commits since 2012, and (2) apps with less than 500 non-comment, non-blank lines of code (SLOC). The latter "toy apps" are not representative of production code. WPCollector makes as many projects compilable as possible (e.g., by resolving and installing dependencies), because the Roslyn APIs that we rely on (see Analysis Infrastructure) require compilable source code. WPCollector successfully downloaded and prepared 1378 apps, comprising 12M SLOC, produced by 3376 developers.

**Analysis Infrastructure**: We used Microsoft's recently released Roslyn [31] SDK, which provides an API for syntactic and semantic program analysis, AST transformations, and editor services in Visual Studio. Because the publicly available version of Roslyn is incomplete and does not support the `async/await` keywords yet, we used an internal build obtained from Microsoft. We executed AsyncAnalyzer over each app in our corpus. For each of these apps, it inspects the version from the main development branch as of August 1st, 2013.

**Table 1:** Usage of asynchronous idioms. The three columns per platform show the total number of idiom instances, the total number of apps with instances of the idiom, and the percentage of apps with instances of the idiom.

<table>
  <thead>
    <tr><th rowspan="2">Idiom</th><th colspan="3">WP7</th><th colspan="3">WP8</th></tr>
    <tr><th>#</th><th>App</th><th>App%</th><th>#</th><th>App</th><th>App%</th></tr>
  </thead>
  <tbody>
    <tr><td>I/O APM</td><td>1288</td><td>242</td><td>22%</td><td>1871</td><td>38</td><td>20%</td></tr>
    <tr><td>I/O TAP</td><td>123</td><td>23</td><td>2%</td><td>269</td><td>57</td><td>16%</td></tr>
    <tr><td>New Thread</td><td>183</td><td>92</td><td>8%</td><td>28</td><td>24</td><td>7%</td></tr>
    <tr><td>BG Worker</td><td>149</td><td>73</td><td>6%</td><td>11</td><td>6</td><td>2%</td></tr>
    <tr><td>ThreadPool</td><td>386</td><td>103</td><td>9%</td><td>52</td><td>24</td><td>7%</td></tr>
    <tr><td>New Task</td><td>51</td><td>11</td><td>1%</td><td>182</td><td>28</td><td>8%</td></tr>
  </tbody>
</table>
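WPCollector's two filtering rules amount to a simple predicate over app metadata. The following Python sketch is illustrative only (field names such as `last_commit_year` and `sloc` are hypothetical, not WPCollector's actual data model):

```python
def keep_app(app):
    """WPCollector-style filter (sketch): drop stale apps and 'toy apps'.

    (1) ignore apps without commits since 2012;
    (2) ignore apps below 500 non-comment, non-blank lines (SLOC).
    """
    return app["last_commit_year"] >= 2012 and app["sloc"] >= 500

corpus = [
    {"name": "stale-app", "last_commit_year": 2010, "sloc": 9000},
    {"name": "toy-app",   "last_commit_year": 2013, "sloc": 120},
    {"name": "kept-app",  "last_commit_year": 2013, "sloc": 2400},
]
kept = [a["name"] for a in corpus if keep_app(a)]
```

Only apps passing both thresholds enter the corpus; the others are discarded before any compilation or analysis effort is spent on them.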
We developed a specific analysis to answer each research question.

### 3.2 How do developers use asynchronous programming?

**Asynchronous APIs**: We detected all APM and TAP methods that are used in our code corpus, as shown in Table 1. Because in WP7 apps TAP methods are only accessible via additional libraries, Table 1 tabulates the usage statistics for WP7 and WP8 apps separately. The data shows that APM is more popular than TAP for both WP7 and WP8. We also manually inspected all APM and TAP methods used and categorized them based on the type of I/O operations: network (1012), file system (310), database (145), user interaction (102), and other I/O (e.g., speech recognition) (68). We found that asynchronous operations are most commonly used for network operations.

There are two ways to offload CPU-bound operations to another thread: by creating a new thread, or by reusing threads from the thread pool. Based on C# books and references [1], we distinguish 3 different approaches developers use to access the thread pool: (1) the BackgroundWorker class, (2) accessing the ThreadPool directly, and (3) creating Tasks. Table 1 tabulates the usage statistics of all these approaches. Because Task is only available in WP7 apps by using additional libraries, the table shows separate statistics for WP7 and WP8 apps. The data shows that Task is used significantly more in WP8 apps, most likely because of availability in the core platform.

**Language Constructs**: `async/await` became accessible for WP development in the last quarter of 2012. While they are available by default in WP8, WP7 apps have to reference the Microsoft.Bcl.Async library to use them. We found that 45% (157) of WP8 apps use the `async/await` keywords.

*Callback-based APM is the most widely used idiom. While nearly half of all WP8 apps have started to use `async/await`, only 10 WP7 apps use them.*

### 3.3 Do developers misuse async/await?

Because async/await are relatively new language constructs, we have also investigated how developers misuse these constructs. We define misuse as anti-patterns which hurt performance and might cause serious problems like deadlocks. We detected the following typical misuse idioms.

#### 3.3.1 Fire & Forget methods

799 of 2383 async/await methods are "fire&forget" methods, which return void. Unless a method is only called as a UI event handler, it must be awaitable. Otherwise, it is a code smell because it complicates control flow and makes error detection & correction difficult. Exceptions in fire&forget methods cannot be caught in the calling method, causing termination of the app. Instead, such methods should return Task, which does not force the method to return anything but enables easier error-handling, composability, and testability. However, we found that only 339 out of these 799 async void methods are event handlers. This means that 19% of all async methods (460 out of 2383) are not following this important practice [22].

*One in five async methods violates the principle that an async method should be awaitable unless it is the top-level event handler.*

#### 3.3.2 Unnecessary async/await methods

Consider the example from "Cimbalino Windows Phone Toolkit" [3]:

```csharp
public async Task<Stream> OpenFileForReadAsync(...) {
    return await Storage.OpenStreamForReadAsync(path);
}
```

The OpenStream method is a TAP call, which is awaited in the OpenFile method. However, there is no need to await it. Because there is no statement after the await expression except for the return, the method is paused without reason: the Task that is returned by Storage.OpenStream can be immediately returned to the caller. The snippet below behaves exactly the same as the one above:

```csharp
public Task<Stream> OpenFileForReadAsync(...)
{
    return Storage.OpenStreamForReadAsync(path);
}
```

It is important to detect this kind of misuse. Adding the await modifier comes at a price: the compiler generates some code in every async method. We discovered that in 26% of the 167 apps, 324 out of all 2383 async methods unnecessarily use async/await.

*There is no need to use async/await in 14% of all async methods.*

#### 3.3.3 Long-running operations under async methods

We also noticed that developers use some potentially long-running operations under async methods, even though there are corresponding asynchronous versions of these methods in .NET or third-party libraries. Consider the following example from indulged-flickr [15], a Flickr app:

```csharp
public async void GetPhotoStreamAsync(...) {
    var response = await DispatchRequest(...);
    using (StreamReader reader = new StreamReader(...)) {
        string jsonString = reader.ReadToEnd();
    }
}
```

The developer might use await ReadToEndAsync() instead of the synchronous ReadToEnd call, especially if the stream is expected to be large. In the example below from iRacerMotionControl [23], the situation is more severe.

```csharp
private async void BT2Arduino_Send(string WhatToSend) {
    await BTSock.OutputStream.WriteAsync(datab);
    txtBTStatus.Text = "sent";
    System.Threading.Thread.Sleep(5000);
    ....
}
```

The UI event thread calls BT2Arduino_Send, which blocks the UI thread by busy-waiting for 5 seconds. Instead of using the blocking Thread.Sleep method, the developer should use the non-blocking Task.Delay(5000) method to preserve similar timing behavior, and await it to prevent the UI from freezing for 5 seconds. We found 115 instances of potentially long-running operations in 22% of the 167 apps that use async/await.

*1 out of 5 apps misses opportunities in at least one async method to increase asynchronicity.*

#### 3.3.4 Unnecessarily capturing context

async/await introduce new risks if the context is captured without specifying ConfigureAwait(false).
For example, consider the following example from adsclient [2]:

```csharp
void GetMessage(byte[] response) {
    ReceiveAsync(response).Wait();
    ...
}
async Task<bool> ReceiveAsync(byte[] message) {
    ...
    return await tcs.Task;
}
```

If GetMessage is called from the UI event thread, the thread will wait for completion of ReceiveAsync because of the Wait call. When the await completes in ReceiveAsync, it attempts to execute the remainder of the method within the captured context, which is the UI event thread. However, the UI event thread is already blocked, waiting for the completion of ReceiveAsync. Therefore, a deadlock occurs.

To prevent the deadlock, the developer needs to set up the await expression to use ConfigureAwait(false). Instead of attempting to resume the ReceiveAsync method on the UI event thread, it now resumes on the thread pool, and the blocking wait in GetMessage does not cause a deadlock any more. In the example above, although ConfigureAwait(false) is a solution, we fixed it by removing await because it was also an instance of unnecessary async/await use. The developer of the app accepted our fix as a patch. We found 5 different cases of this type of deadlock, which can happen if the caller method executes on the UI event thread.

Capturing the context can also cause another problem: it hurts performance. As asynchronous GUI applications grow larger, there can be many small parts of async methods all using the UI event thread as their context. This can cause sluggishness, as responsiveness suffers from thousands of paper cuts. Using ConfigureAwait(false) also enables a small amount of parallelism: some asynchronous code can run in parallel with the UI event thread instead of constantly badgering it with bits of work to do. To mitigate these problems, developers should await the Task with ConfigureAwait(false) whenever they can. If the statements after the await expression do not update the UI, ConfigureAwait(false) must be set.
Detecting this misuse is important because using ConfigureAwait(false) might prevent future bugs like deadlocks and improve performance. 1786 out of 2383 async methods do not update GUI elements in their call graph after await expressions. We found that ConfigureAwait(false) is used in only 16 out of these 1786 async methods in await expressions. All 1770 other async methods should have used ConfigureAwait(false).

*99% of the time, developers did not use ConfigureAwait(false) where this was needed.*

<table>
  <thead>
    <tr><th>Misuse</th><th>#</th><th>Method%</th><th>App%</th></tr>
  </thead>
  <tbody>
    <tr><td>(1) Fire&amp;Forget</td><td>460</td><td>19%</td><td>76%</td></tr>
    <tr><td>(2) Unnecessary Async</td><td>324</td><td>14%</td><td>26%</td></tr>
    <tr><td>(3) Potential Long-Running</td><td>115</td><td>5%</td><td>22%</td></tr>
    <tr><td>(4) Unnecessary Context</td><td>1770</td><td>74%</td><td>86%</td></tr>
  </tbody>
</table>

## 4. TOOLKIT

Based on our findings, we developed a two-fold approach to support the developer: (1) Asyncifier, a refactoring tool to upgrade legacy callback-based APM code to take advantage of the async/await constructs (see Section 4.1), and (2) Corrector, a tool for detecting and fixing misuses of async/await in code (see Section 4.2).

### 4.1 Refactoring APM to async & await

#### 4.1.1 Challenges

There are two main challenges that make it hard to execute the refactoring quickly and flawlessly by hand. First, the developer needs to understand whether the APM instance is a candidate for refactoring, based on the preconditions in Section 4.1.2. Second, they must transform the code while retaining the original behavior of the code - both functionally and in terms of scheduling. This is non-trivial, especially in the presence of (1) exception handling, and (2) APM End methods that are placed deeper in the call graph.
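The exception-handling pitfall stems from the fact that an exception surfaces at the await site, not where the operation was started. Python's asyncio behaves the same way, which gives a compact, runnable analogue (a hedged sketch, not the paper's C# tooling):

```python
import asyncio

async def fetch():
    # stand-in for a background operation that fails
    raise OSError("network error")

async def caller():
    task = asyncio.ensure_future(fetch())
    try:
        # the exception is raised HERE, at the await site,
        # not at the line where the operation was started
        await task
    except OSError as e:
        return "handled: " + str(e)

outcome = asyncio.run(caller())
```

If the try-except were placed around the `ensure_future` call instead of the `await`, the handler would never fire: the same shift in throw site that the refactoring must avoid when replacing an End method call with an await expression.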
**Exception handling**: The refactoring from APM to async/await should retain the functional behavior of the original program, both in the normal case and under exceptional circumstances. In 52% of all APM instances, try-catch blocks are in place to handle those exceptions. The try-catch blocks surround the End method invocation, which throws an exception if the background operation results in an exceptional circumstance. These catch blocks can contain business logic: for example, a network error sometimes needs to be reported to the user ("Please check the data or WiFi connection"). Code listing 4 shows such an example.

The naive approach to introducing async/await is to replace the Begin method invocation with an invocation of the corresponding TAP method, and await the result immediately. However, the await expression is the site that can throw the exception when the background operation fails. Thus, the exception would be thrown at a different site, and this can drastically change behavior. By introducing the await expression as a replacement of the End method call at the exact same place, existing exception handling will work exactly as it did before. This is a non-trivial insight for developers, because online examples of async/await only show the refactoring for extremely simple cases, where this is not a concern.

**Hidden End methods**: The developer needs to take even more care when the End method is not immediately called in the callback lambda expression, but is 'hidden' deeper down the call chain. In that case, the Task instance must be passed down to where the End method invocation was, to retain exceptional behavior. This requires an inter-procedural analysis of the code: each of the methods through which the IAsyncResult 'flows' must be refactored, which makes the refactoring more tedious.
The developer must trace the call graph of the callback to find the End method call, and in each encountered method: (1) replace the IAsyncResult parameter with a Task<T> parameter (with T being the return type of the TAP method, (2) replace the return type R with async Task<R>, or void with async void or async Task, and (3) introduce ConfigureAwait(false) at each await expression. As shown in the results of the empirical study, when its presence is critical to retain UI responsiveness, developers almost never use ConfigureAwait(false) where it should be used. Code listing 5 shows such an example. **4.1.2 Algorithm precondition** An invocation of a Begin method is a candidate for refactoring to async/await based constructs, if it adheres to the following preconditions and restrictions: **Code 4** EndGetResponse in try-catch block ```csharp void Button_Click(...) { WebRequest request = WebRequest.Create(url); request.BeginGetResponse(Callback, request); } void Callback(IAsyncResult ar) { WebRequest request = (WebRequest)ar.AsyncState; try { var response = request.EndGetResponse(ar); // Do something with successful response. } catch (WebException e) { // Error handling } } ``` **Code 5** EndGetResponse on longer call graph path ```csharp void Button_Click(...) { WebRequest request = WebRequest.Create(url); request.BeginGetResponse(ar => { IntermediateMethod(ar, request); }, ar1); } ``` void IntermediateMethod(IAsyncResult result, WebRequest request) { var response = GetResponse(request, result); // Do something with response } WebResponse GetResponse(WebRequest request, IAsyncResult result) { return request.EndGetResponse(result); } Code 6 Adheres to precondition ```csharp void Action(WebRequest request) { var response = request.GetResponse(asyncResult => { // Do something with response. 
  }, null);
}
```

**Code 7** Code listing 2 refactored to meet preconditions

```csharp
void GetFromUrl(string url) {
  var request = WebRequest.Create(url);
  request.BeginGetResponse(asyncResult => {
    Callback(asyncResult, request);
  }, null);
}

void Callback(IAsyncResult ar, WebRequest request) {
  var response = request.EndGetResponse(ar);
  var stream = response.GetResponseStream();
  var content = stream.ReadAsString();
  Dispatcher.BeginInvoke(() => { textBox.Text = content; });
}
```

**P1:** The APM method call must represent an asynchronous operation for which a TAP-based method also exists. Obviously, if the TAP-based method does not exist, the code cannot be refactored.

**P2:** The Begin method invocation statement must be contained in a regular method, i.e., not in a lambda expression or anonymous delegate method. The containing method will be made async. While it is possible to make lambdas and anonymous delegate methods async, this is considered a bad practice because it usually creates an async void fire & forget method (see Section 3.3.1).

**P3:** The callback argument must be a lambda expression with a body consisting of a block of statements. The call graph of that block must contain an End method invocation that takes the lambda's IAsyncResult parameter as argument. This means that the callback must actually end the background operation.

**P4:** In the callback call graph, the IAsyncResult lambda parameter should not be used, except as argument to the End method.

**P5:** The state argument must be a null literal. As the IAsyncResult lambda parameter must be unused, its AsyncState property should be unused as well, so the state argument expression of the Begin method invocation should be null.

**P6:** In the initiating method (the method containing the Begin method invocation), the IAsyncResult return value of the Begin method should not be used, because it is returned by a method invocation that will disappear.

Code listing 6 shows a valid example in the context of these preconditions.
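For contrast, the following hypothetical invocation (the Log helper is invented for illustration) violates two of the preconditions and would therefore be rejected:

```csharp
// Violates P4 and P5: the state argument is not null, and the
// IAsyncResult parameter is used for more than the End call.
request.BeginGetResponse(ar =>
{
    var response = request.EndGetResponse(ar);
    Log(ar.CompletedSynchronously);  // P4 violation: extra use of ar
    // Do something with response.
}, request);                         // P5 violation: non-null state
```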
Applying these preconditions to APM instances in real-world applications restricts the number of APM instances that can be refactored. Fortunately, many instances in other forms can be refactored into this form. Code listing 2 shows an example that fails P3 and P5: the callback argument is a method reference, and the state argument is not null. This instance can be refactored into the code shown in listing 7 by applying the “Introduce Parameter” refactoring to the request variable in the original Callback method. Based on cases encountered in the analyzed code corpus, we have identified and (partially) implemented several such refactorings in Asyncifier. Examples are (1) identification of unused state arguments, which can be replaced with null (solves violations of P5), and (2) rewriting of some callback argument expressions (solves violations of P3).

**4.1.3 Refactoring APM instances**

Asyncifier detects all Begin method invocations that fulfill the preconditions. It takes the following steps to refactor the APM instance to async/await-based constructs.

**Traversing the call graph from Begin to End** First, Asyncifier explores the call graph of the body of the callback lambda expression to find the invocation path to the End invocation. It performs a depth-first search of the call graph, looking up the symbols of any non-virtual method that is encountered. There are two possible scenarios: the End method invocation (1) is placed directly in the lambda expression, or (2) is found on the call graph of the lambda body, in another method’s body. Code listing 6 is an example of the first case. In the second case, Asyncifier identifies three kinds of methods on the call graph path: (1) the initiating method, which contains the Begin method invocation, (2) the result-obtaining method, which contains the End method invocation, and (3) intermediate methods, the remaining methods on the path. Code listing 7 is an example of the second case.
This example is used in the description of the following steps.

**Rewriting the initiating method** In both cases, the initiating method needs to be rewritten. Asyncifier adds the async modifier to the signature of the initiating method. It changes the return type from void to Task, or from any other return type T to Task<T>:

```csharp
void GetFromUrl(string url) { ... }
async Task GetFromUrl(string url) { ... }
```

Asyncifier replaces the Begin method invocation statement with a local variable declaration of a task that is assigned the result of the corresponding TAP method invocation. The parameterized type is the return type of the End method:

```csharp
request.BeginGetResponse(...);
Task<WebResponse> task = request.GetResponseAsync();
```

Then it appends the statements in the lambda expression body to the body of the initiating method:

```csharp
async Task GetFromUrl(string url) {
  var request = WebRequest.Create(url);
  var task = request.GetResponseAsync();
  Callback(asyncResult, request);
}
```

Finally, it replaces each reference to the asyncResult lambda parameter with a reference to the newly declared Task instance.
```csharp
async Task GetFromUrl(string url) {
  var request = WebRequest.Create(url);
  var task = request.GetResponseAsync();
  Callback(task, request);
}
```

**Code 8** TAP- & async/await-based code after refactoring

```csharp
async Task GetFromUrl(string url) {
  var request = WebRequest.Create(url);
  Task<WebResponse> task = request.GetResponseAsync();
  Callback(task, request);
}

async Task Callback(Task<WebResponse> task, WebRequest request) {
  var response = await task.ConfigureAwait(false);
  var stream = response.GetResponseStream();
  var content = stream.ReadAsString();
  Dispatcher.BeginInvoke(() => { textBox.Text = content; });
}
```

**Rewriting the result-obtaining method** **Asyncifier** updates the signature of the result-obtaining method as follows: (1) it adds the `async` modifier, (2) it replaces return type `void` with `Task`, or any other `T` with `Task<T>`, and (3) it replaces the `IAsyncResult` parameter with a `Task<T>` parameter, with `T` the return type of the `End` method:

```csharp
void Callback(IAsyncResult asyncResult, WebRequest request) { ... }
async Task Callback(Task<WebResponse> task, WebRequest request) { ... }
```

Then it replaces the `End` method invocation expression with an `await` of the task, without capturing the synchronization context:

```csharp
var response = request.EndGetResponse(asyncResult);
var response = await task.ConfigureAwait(false);
```

**Asyncifier** refactors the APM instance into the code shown in listing 8. If the introduction of new variables leads to identifier name clashes, **Asyncifier** disambiguates the newly introduced names by appending an increasing number to them, i.e., `task1`, `task2`, etc.

**Callbacks containing the End call** If the `End` method invocation is now in the initiating method, **Asyncifier** replaces it with an `await` expression, and the refactoring is complete. The example in Code listing 6 would be completely refactored at this point:

```csharp
async Task Action(WebRequest request) {
  var task = request.GetResponseAsync();
  var response = await task.ConfigureAwait(false);
  // Do something with response.
}
```

**Rewriting intermediate methods** Intermediate methods must be rewritten if the `End` method is not invoked in the callback lambda expression body. **Asyncifier** refactors every intermediate method recursively, applying the same steps as for the result-obtaining method. Additionally, at the call site of each method, the reference to the (removed) `result` parameter is replaced with a reference to the (newly introduced) `task` parameter.

4.1.4 Retaining original behavior

It is crucial that the refactored code has the same behavior in terms of scheduling as the original code. Both the `Begin` method and the TAP method start the asynchronous operation. In the APM case, the callback is only executed once the background operation has completed. With async/await, the same happens-before relationship holds: the statements that follow the `await` of the `Task` returned by the TAP method execute only after the background operation has completed. Because the statements from the callback are placed after the `await` expression, which pauses execution until completion of the background operation, this timing behavior is preserved.

4.1.5 Implementation limitations

The set of candidates is restricted by tool limitations related to re-use of `Begin` or `End` methods. First, there must not be other call graph paths leading from a `Begin` method call to the target `End` method; that is, the specific `End` method invocation must not be shared between multiple `Begin` invocations. Second, recursion in the callback, through another `Begin` call that references the same callback again, is not allowed (essentially, this is also sharing of an `End` method call). Third, **Asyncifier** does not support multiple `End` method invocations that correspond to a single `Begin` method invocation, for example through the use of branching. However, this case is very rare.
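The preserved ordering can be sketched in a few lines (S is a placeholder for the statements that consume the response; the fragments are illustrative, not complete methods):

```csharp
// APM: S runs in the callback, only after the operation completes.
request.BeginGetResponse(ar => {
    var response = request.EndGetResponse(ar);
    S(response);
}, null);

// async/await: S follows the await, so it likewise runs only after
// the operation completes (this fragment belongs inside an async method).
var response2 = await request.GetResponseAsync();
S(response2);
```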
4.2 Corrector

We implemented another tool, **Corrector**, that detects and corrects the common misuses we explained in RQ4. **Corrector** takes a project file as input and automatically corrects any misuses it finds, without user intervention. Although this batch mode works to fix existing misuses, it does not prevent users from making mistakes. Hence, **Corrector** also supports a *Quick Fix* mode for Visual Studio. This mode shows a small icon close to the location of the misuse and offers a transformation to fix the problem, similar to the Quick Fix feature in Eclipse.

1) Fire & forget methods: There is no fix that can be automated for this misuse. If a `fire & forget` method were converted to an `async Task` method and awaited in the caller, the semantics would change. Therefore, the developer’s understanding of the code is required to fix these cases.

2) Unnecessary `async/await` methods: **Corrector** checks whether an `async` method body contains only one `await` keyword, applied to a TAP method call in the last statement of the method. **Corrector** does not do this for `async void` (`fire & forget`) methods, because removing the `await` from the last statement of an `async void` method would silence any exception that can occur in that statement. To fix these cases, **Corrector** removes the `async` keyword from the method signature and the `await` keyword from the TAP method call. The method then directly returns the `Task` that results from the TAP method call, as shown in the examples of RQ4.

3) Long-running operations in `async` methods: To detect these operations, **Corrector** looks up the symbol of each method invocation in the bodies of `async` methods. After obtaining the symbol information, **Corrector** examines the other members of the symbol’s containing class to check whether an asynchronous version exists.
For instance, if there is an `x.Read()` method invocation and `x` is an instance of the `Stream` class, **Corrector** looks at the members of the `Stream` class to see whether there is a `ReadAsync` method that takes the same parameters and returns a `Task`. By dynamically checking the members, **Corrector** can find asynchronous versions not only in the .NET framework but also in third-party libraries. **Corrector** also maps corresponding blocking and non-blocking methods which do not follow the `Async` suffix convention (e.g., `Thread.Sleep` -> `Task.Delay`). **Corrector** avoids introducing asynchronous file I/O operations in loops, as this could result in slower performance than the synchronous version. After finding the corresponding non-blocking operation, **Corrector** simply replaces the invocation with the new operation and awaits it.

4) Unnecessarily capturing context: **Corrector** checks whether any statement in the call graph of an `async` method accesses a GUI element (read or write). It inspects each object’s symbol to determine whether it comes from the System.Windows or Microsoft.Phone namespaces. All GUI elements are in these namespaces, but not every construct in these namespaces is a GUI element; this makes our analysis conservative. If **Corrector** does not find any GUI element access after the `await` points in an `async` method, it simply appends `ConfigureAwait(false)` to the TAP calls. Even though putting `ConfigureAwait(false)` on a single TAP call in an `async` method can be enough, it is good practice to put it on every TAP call in the method.

5. EVALUATION

5.1 Quantitative

To evaluate the usefulness of Asyncifier and Corrector, we answer the following questions using our code corpus:

EQ1: Are they applicable? We executed Asyncifier over our code corpus. After each transformation, Asyncifier compiled the app in-memory and checked whether compilation errors were introduced.
54% of the 1245 APM instances adhere to the preconditions set in Section 4.1.2; all of these were successfully refactored. By manually checking 10% of all transformed instances, randomly sampled, we verified that Asyncifier refactors APM instances correctly. For the 46% of unsupported APM instances, Asyncifier does not touch the original program. The two main causes of unsuccessful refactorings are (1) instances that do not adhere to the preconditions, and (2) tool limitations. The former consist mostly of instances that cannot be refactored because of fundamental limitations of the algorithm. Examples are callback expressions that reference a field delegate, or APM End methods that are hidden behind interface implementations (both violations of precondition P3). The latter consist of the examples given in Section 4.1.5. We also applied Corrector to the full corpus. All instances of type (2), (3), and (4) misuses were corrected automatically.

EQ2: What is the impact of refactoring on code? Asyncifier touches 28.9 lines on average per refactoring. This shows that these refactorings need automation support, because they touch many lines of code. Corrector touches one line per misuse of types (3) and (4) in Section 4.2. It touches 2 or 3 lines per misuse of type (2), 2.1 lines on average.

EQ3: Is the tool efficient? For Asyncifier, the average time needed to refactor one instance is 508 ms, rendering Asyncifier suitable for an interactive refactoring mode in an IDE. Because the detection and fixing of type (2) and (3) misuses is straightforward, we did not measure their execution time. However, detecting the type (4) misuse is expensive, as it requires inspection of the call graph of the async method. We found that analyzing one async method for this misuse takes 47 ms on average. This shows that Corrector can be used interactively in an IDE, even for the type (4) misuse.
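As a concrete illustration of the type (2) correction described in Section 4.2, the transformation has roughly the following shape (the DownloadAsync method and the client field of type HttpClient are invented for illustration, not taken from the corpus):

```csharp
// Before: redundant async/await — the method only awaits a TAP call
// in its last statement, paying for an unnecessary state machine.
async Task<string> DownloadAsync(string url)
{
    return await client.GetStringAsync(url);
}

// After the fix: return the Task from the TAP call directly.
Task<string> DownloadAsync(string url)
{
    return client.GetStringAsync(url);
}
```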
5.2 Qualitative evaluation

To further evaluate the usefulness in practice, we identified the 10 most recently updated apps that have APM instances. We applied Asyncifier ourselves, and offered the modifications to the original developers as patches via pull requests. 9 out of 10 developers responded, and they accepted every one of our 28 refactorings. We received very positive feedback on these pull requests. One developer would like to have the tool available right now: “I’ll look forward to the release of that refactoring tool, it seems to be really useful.” The developer of phoneguitartab [4] said that he had “been thinking about replacing all asynchronous calls [with] new async/await style code”. This illustrates the demand for tool support for the refactoring from APM to async/await.

For Corrector, we selected the 10 most recently updated apps for each of the type (2) and (3) misuses. We did not specifically select apps for the type (4) misuse, but Corrector did fix this misuse in the selected apps. In total, we selected 19 apps, because one app had both type (2) and type (3) misuses. Developers of 18 apps replied and accepted all our patches, corresponding to 149 instances of type (2), 38 instances of type (3), and 98 instances of type (4) misuses. In total, the developers of these 18 apps accepted 285 Corrector transformations.

Response to the fixes that removed unnecessary async/await keywords was similarly positive. One developer pointed out that he had missed several unnecessary async/await instances that Corrector detected: “[…] I normally try to take the same minimizing approach, though it seems I missed these.” [32] The developer of SoftBuildData [6] experienced performance improvements after removing unnecessary async/await: “[…] performance has been improved to 28 milliseconds from 49 milliseconds.” Again, these responses illustrate the need for tools that support the developer in finding problems in the use of async/await.
Furthermore, the developer of the playerframework [5] said that they missed the misuses because the particular code was ported from old asynchronous idioms. This demonstrates the need for Asyncifier, as it can help a developer upgrade his or her code without introducing incorrect usage of async/await.

6. DISCUSSION

6.1 Implications

Our study has practical implications for developers, researchers, and language and library designers.

Developers learn a new programming construct through both positive and negative examples. Robillard and DeLine [35] study what makes large APIs hard to learn and conclude that one of the important factors is the lack of usage examples. We provide hundreds of real-world examples of all asynchronous idioms on http://LearnAsync.net/. Because developers might need to inspect the whole source file or project to understand an example, we also link to highlighted source files on GitHub [39]. We also provide negative examples anonymously, without giving app names.

Language and library designers can learn which constructs and idioms are embraced by developers, and which ones are tedious to use or error-prone. Because some other major languages plan to introduce similar constructs for asynchronous programming, this first study can guide them toward an improved design of such language constructs. For instance, capturing the context might not be the best default: developers are very likely to forget to use ConfigureAwait(false).

Tool vendors can take advantage of our findings on async/await misuse. IDEs such as Visual Studio should have built-in quick fixes (similar to ours) to prevent users from introducing misuse. For instance, if a developer introduces a fire & forget method, the IDE should give a warning unless the method is a top-level event handler.

Researchers in the refactoring community can use our findings to target future research. For example, as we see from Table 1, the usage of Task jumped from 1% to 8% in WP8.
This calls for work on a tool that converts old asynchronous idioms for CPU-bound computations (e.g., Thread) to new idioms (e.g., Task).

6.2 Threats to Validity

Internal: Is there something inherent to how we collect and analyze the usage that could skew the accuracy of our results? First, the study focuses only on static usage of asynchronous constructs, but one use of a construct (i.e., a call site) could correspond to a large percentage of execution time, making the program very asynchronous in practice. Likewise, the opposite could be true. However, we are interested in the developer’s view of writing, understanding, maintaining, and evolving the code, not in the performance tools’ view of the code (i.e., how much of the total running time is spent in asynchronous code). For our purposes, static analysis is much more appropriate.

External: Are the results representative? First, despite the fact that our corpus contains only open source apps, the 1378 apps span a wide range of domains, from games, social networking, and office productivity to image processing and third-party libraries. They are developed by different teams, with 3376 contributors from a large and varied community. Our code corpus contains all Windows Phone apps from GitHub and CodePlex, without any random sampling or selection. While we answer our research questions for the Windows Phone ecosystem, we expect the answers to cross the boundary from mobile to any platform targeted by C# (e.g., desktop, web). Asynchronous programming is similar on those platforms: developers have access to the same async/await language constructs, and to similar APIs.

Reliability: Are our empirical study and evaluation reliable? A detailed description of our results, with fine-grained reports, is available online. Because we used an internal version of Microsoft’s Roslyn, we had to sign an NDA, which prohibits us from releasing the binaries of any tool using it (AsyncAnalyzer, Asyncifier, and Corrector).
We will be able to publish the tools based on a public release, which we expect by late Fall ’13.

6.3 Future Work

Our study was limited to apps targeting the Windows Phone platform. However, we believe that the tools can also be used for apps targeting other C# platforms, such as desktop, web (ASP.NET) and console apps. Future work would entail a study of asynchronous programming on those platforms, similar to the one presented in this paper.

The refactoring tool that replaces APM instances with async/await-based code has several limitations, as mentioned in Section 4.1.5. We plan to remove those limitations, and we expect to be able to show that the success rate of the refactoring tool will increase to 65%. As soon as there is a publicly available version of Roslyn, we plan to update and release all the currently unreleased tools.

7. RELATED WORK

Empirical Studies: There are several empirical studies [9, 20, 26, 29] on the usage of libraries or programming language constructs. To the best of our knowledge, there is no empirical study on asynchronous idioms and language constructs for asynchronous programming. We have previously conducted an empirical study [28] on how developers from thousands of open source projects use Microsoft’s parallel libraries. There is a small intersection between asynchronous and parallel libraries: only the ThreadPool, Task, and Task Parallel constructs. In this paper, we studied these three constructs as three of the five different approaches for asynchronous CPU-bound computations.

Refactoring Tools: Traditionally, refactoring tools have been used to improve the design of sequential programs. There are a few refactoring tools that specifically target concurrency. We have used refactoring [13, 14] to retrofit parallelism into sequential applications via concurrent libraries. In the same spirit, Włoka et al. present a refactoring for replacing global state with thread-local state [40]. Schäfer et al.
present Relocker [36], a refactoring tool that lets programmers replace usages of Java built-in locks with more flexible locks. Gyori et al. present LambdaFicator [21], which refactors existing Java code to use lambda expressions and enable parallelism. To the best of our knowledge, there is no refactoring tool that specifically targets asynchronous programming. In industry, ReSharper is a well-known refactoring tool, but it does not support async/await-specific refactorings [34]. Our refactoring toolkit helps developers design responsive apps, an area not explored so far [12].

8. CONCLUSION

Because responsiveness is very important on mobile devices, asynchronous programming is already a first-class citizen in modern programming environments. However, the empirical research community and tool vendors have not yet embraced it to the same degree. Our large-scale empirical study of Windows Phone apps provides insight into how developers use asynchronous programming. We have discovered that developers make many mistakes when manually introducing asynchronous programming based on the modern C# language features async/await. We provide a toolkit to support developers in preventing and curing these mistakes. Our toolkit (1) safely refactors legacy callback-based asynchronous code to async/await, (2) detects and fixes existing errors, and (3) prevents the introduction of new errors. Evaluation of the toolkit shows that it is highly applicable, and developers already find the transformations very useful and look forward to using our toolkit. We hope that our study motivates follow-up studies to fully understand the state of the art in asynchronous programming.

9. REFERENCES

[38] Don Syme, Tomas Petricek, and Dmitry Lomov. The F# asynchronous programming model. In Proceeding of the 13th international conference on Practical
AUTOMATED STUDENT CODE ASSESSMENT WITH SYMBOLIC EXECUTION AND JAVA PATHFINDER

A Thesis Presented to the Faculty of California Polytechnic State University, San Luis Obispo, In Partial Fulfillment of the Requirements for the Degree Master of Science in Computer Science, by Karl Bell, December 2012

COMMITTEE MEMBERSHIP

TITLE: Automated Student Code Assessment with Symbolic Execution and Java PathFinder
AUTHOR: Karl Bell
DATE SUBMITTED: December 2012
COMMITTEE CHAIR: Dr. John Clements
COMMITTEE MEMBER: Dr. David Janzen
COMMITTEE MEMBER: Dr. Gene Fisher

Abstract

Automated Student Code Assessment with Symbolic Execution and Java PathFinder, by Karl Bell

The assessment of student code is a necessary part of most programming courses. However, many ways of assessing the correctness of student code can be very time-consuming and may be error-prone. This paper presents JSymTester, a tool which uses the symbolic execution framework of Java PathFinder to find test inputs for student code and uses these inputs to extensively compare its functionality to a reference implementation. This allows for automatic testing of student code, relying only on the reference implementation and the student’s own implementation, eliminating the need to manually write tests. This tool was tested on small assignments for an introductory computer science course, and performed similarly to the existing, more traditional approaches of unit testing and output comparison. This shows that automated test generation techniques may, in general, be useful in the area of student code assessment.

Contents

List of Tables
List of Figures
1 Introduction
2 Test Generation
  2.1 Symbolic Execution
  2.2 Concolic Execution
3 Related Work
  3.1 Automated Test Generation in Java
    3.1.1 Tools for Automated Test Generation
    3.1.2 Concolic Execution in Java
  3.2 Automated Assessment
    3.2.1 Tools for Automated Assessment
  3.3 Test Data Generation in Education
4 Implementation
  4.1 JPF Integration
    4.1.1 Adding a main() Method
  4.2 Object Reconstruction
  4.3 SymTestRunner
  4.4 Web-IDE Evaluator
5 Validation
  5.1 Test Setup
  5.2 Results
6 Future Work
  6.1 Beyond Code Coverage
  6.2 Language Feature Support
  6.3 Ease of Use
  6.4 Model-Checking
  6.5 Constructive Feedback
7 Conclusion
Bibliography

List of Tables

5.1 Overall test results
5.2 Test results by exercise
5.3 Failures not caught by evaluators

List of Figures

2.1 Example code and its symbolic execution tree
2.2 Example code and a concolic execution, step 1
2.3 Example code and a concolic execution, step 2
2.4 Example code and a concolic execution, step 3
2.5 A code path which concolic testing can execute

Chapter 1

Introduction

The assessment of student code is a necessary part of most programming courses, as it provides a way to see if learning goals are being met.
It directs the learning process of the student and helps them to see what they need to spend more time on. Unfortunately, assessing student code manually can be difficult and time-consuming. In addition, it is difficult to judge the correctness of student code without spending a large amount of time understanding it.[14]

For this reason, many instructors have seen fit to create their own automated testing tools to help assess student code, the majority of which have focused on examining student code functionality in some way. These tools can come in many forms, from scripting and output comparison to using testing frameworks like XUnit.[14] One problem with these approaches is that an instructor must write a suite of tests (or sets of inputs and outputs) to validate that the student's code performs as expected in all cases. Not only can this take a significant amount of time, but it is also possible that the generated inputs will not take into account the oddities of student code, as they are generally written before the student submits their code for evaluation.[14]

One way to address this issue is to write assignments in such a way as to get the students to write their own tests. This is the case for test-driven learning approaches.[4] However, this leads to the problem of verifying that the student-written tests actually reflect the results which the instructor desired, essentially requiring at least some instructor testing after all.

This thesis presents the "JSymTester" tool for the Java programming language, which seeks to solve these problems by providing a way of automatically assessing student code given only a reference implementation of a programming exercise. It utilizes automated test generation techniques on this reference implementation, as well as the student's implementation, to develop a suite of inputs which tests student code fully, using the instructor code as a test oracle.
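The comparison at the heart of this approach — running the reference and student implementations on the same generated inputs and recording any disagreements — can be sketched in a few lines of Java. The names below are illustrative only, not JSymTester's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Illustrative sketch of oracle-based differential testing: the instructor's
// reference implementation serves as the test oracle for the student's code.
public class DiffSketch {
    // Returns every generated input on which the two implementations disagree.
    public static List<Integer> disagreements(IntUnaryOperator reference,
                                              IntUnaryOperator student,
                                              int[] generatedInputs) {
        List<Integer> failures = new ArrayList<>();
        for (int input : generatedInputs) {
            if (reference.applyAsInt(input) != student.applyAsInt(input)) {
                failures.add(input);
            }
        }
        return failures;
    }
}
```

An empty result marks a passing submission; any recorded input is a concrete counterexample that can be reported back to the student.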
JSymTester is based on the symbolic execution module for the Java PathFinder[23], and can be used independently as a command-line application, or as an evaluator for the WebIDE platform[4]. Chapter 2 of this thesis provides background information about test generation. Chapter 3 looks at some of the related work in automated test generation and automated student code assessment. Chapter 4 goes into detail about how the JSymTester works, and Chapter 5 describes the procedures for and the results of our evaluation of it. Chapter 6 provides a number of suggestions for future improvements to the tool and identifies other related avenues of possible research, and Chapter 7 offers concluding remarks.

Chapter 2

Test Generation

To understand the implementation of JSymTester, it makes sense first to examine automated test generation in general. The test generation problem can be stated as follows[6]: Given a sequential program with a set of input parameters, generate a set of test inputs that exercises as many program statements as possible.

There has been a large amount of work in the area of automated test generation. In general, the goal of all of these approaches is, as stated above, to maximize code coverage. This is generally done with some kind of input space exploration, in order to find inputs which cause particular paths to be executed. A simple way of finding test inputs is to randomly generate them. Indeed, this is the concept behind blackbox fuzzing, which throws random inputs (or randomly mutated correct inputs) at a program to see how it behaves.[10] A more guided approach to this state space exploration can be taken by taking into account the structure of the program when generating test inputs. Rather than simply generating random inputs, one can use some knowledge of the program's internals to generate test inputs which are likely to get the program to follow different code paths.
This is generally called whitebox testing, and comes in many forms.[10] The automated test generation tools which are most relevant to this thesis are based on a form of whitebox testing called dynamic test generation, or concolic execution, which is itself an extension of a technique called symbolic execution. A number of tools exist which implement this test-generation strategy.[21, 9, 22] One of these tools is the symbolic execution module for the Java PathFinder, which was used to construct the JSymTester application described here.[23]

It is useful to note that, traditionally, all of the path exploration techniques described above are accompanied by some kind of model-checking to ensure correctness, as there is no test oracle available.[23] In the case of this thesis, the lack of expected outputs is not a problem, as we have a test oracle in the form of the instructor's reference implementation.

2.1 Symbolic Execution

Symbolic execution is a type of program analysis which is based on running a program with symbolic inputs rather than real ones.[18] This means that variables referred to in the program, rather than being given a real value (such as an integer), are instead symbolic expressions based on the symbolic inputs to the program. For example, we might have a program which takes an input x and returns x + 5. With regular concrete execution, we would give x an actual value (say, 3), and we would observe the return value to be an actual value (say, 8). With symbolic execution, we instead substitute a symbolic value for x (say, a). We would then express the return value as a + 5. The same holds for any return values or variables in a given program, in that they would all be expressed in terms of the symbolic inputs (in addition to hard-coded values like 5). However, just treating inputs as symbolic tends to give us little information unless we also take into account the branch points of the program.
Doing so allows symbolic execution to be used to analyze the possible paths of a program.[23] Each possible code path has logical expressions which must be true in order for that code path to be followed. Since we are keeping track of all variables in terms of the symbolic inputs, these conditions can also be expressed in these terms. Each possible path through the program yields a set of logical expressions that are called that path's path constraints. These constraints are expressed in terms of the symbolic input to the program, and can be solved to generate a test input that would cause that particular path to be executed.[23]

Figure 2.1 shows a bit of example code along with its symbolic execution tree. Each possible path down this tree represents a code path, and the relevant statements (conditionals and assignments) would be kept track of by a symbolic execution engine. Assignments must be tracked down the tree for later conditions to depend on.

```java
if (x == 5) {
    if (y > 0) {
        return x * y;
    } else {
        x = y;
    }
} else if (y < 0) {
    return y;
}
return x;
```

Figure 2.1: Example code and its symbolic execution tree.

The usefulness of symbolic execution for test input generation is apparent: symbolic execution provides sets of logical constraints for all possible paths, and each of these sets of constraints can be solved to find a test input which covers a different code path. However, symbolic execution does have some problems. First, it is limited by the constraint solver used. If a set of path constraints can't be solved by the constraint solver, then the symbolic execution engine can't generate inputs which would cause that path to be executed. This can happen frequently when there is some complicated math going on, such as with hash functions, which are deliberately hard to reverse.[7] Second, symbolic execution has a hard time modeling system calls and non-deterministic functions.[7] In both of these cases, the outputs of these functions have unknown constraints.
Their behavior may modify the code path followed in unpredictable ways, and branches based on their results are more or less impossible to enter unless the methods can be abstracted out or controlled in some other way.

2.2 Concolic Execution

Concolic testing, or dynamic test generation, extends symbolic execution by interleaving concrete execution, so that portions of a program which are difficult to reason about symbolically (such as hash functions) can be abstracted out by using actual values in these portions instead of symbolic ones.[6] Concolic testing works by running through the program using concrete inputs, collecting the path constraints (again, in terms of the symbolic inputs) which are true of those inputs as the program runs. The first set of inputs is usually either randomly generated or set to some common or ordinary values. Once the first run is completed, the path constraints for that run are known. The last of these constraints is then falsified, generating a new set of inputs which can be fed into the process again. This process continues until there are no more solvable paths left to explore.[20]

This process is illustrated in Figures 2.2 through 2.4, using the example code from Figure 2.1. In Figure 2.2, the code is run using the shown inputs. This causes the code to run through one particular path. The path constraints are collected as they are encountered, resulting in the list shown as "PC."

x = 0, y = 0

```java
if (x == 5) {
    if (y > 0) {
        return x * y;
    } else {
        x = y;
    }
} else if (y < 0) {
    return y;
}
return x;
```

PC: (x != 5, y >= 0), ret == x

Figure 2.2: Example code and a concolic execution, step 1.

In Figure 2.3, the last condition from the previous step is falsified, resulting in new input values. The code is run through again to gather the path conditions for this path.
x = 0, y = -2

```java
if (x == 5) {
    if (y > 0) {
        return x * y;
    } else {
        x = y;
    }
} else if (y < 0) {
    return y;
}
return x;
```

PC: (x != 5, y < 0), ret == y

Figure 2.3: Example code and a concolic execution, step 2.

In Figure 2.4, since no new conditions are added, the last previously unfalsified condition is falsified to get new inputs. This is then run through the code to get a new path constraint. This process continues until all paths which can be solved for have been explored (this has not been illustrated, for the sake of brevity).

x = 5, y = -2

```java
if (x == 5) {
    if (y > 0) {
        return x * y;
    } else {
        x = y;
    }
} else if (y < 0) {
    return y;
}
return x;
```

PC: (x == 5, y <= 0), ret == y

Figure 2.4: Example code and a concolic execution, step 3.

Concolic testing improves on symbolic execution in two ways. First, it guarantees that any inputs generated do, in fact, cause the expected path to be followed. After all, it has a concrete run of the program to prove this. Second, it allows the program to execute paths which may be impossible to run using just symbolic execution.[6] To illustrate this, imagine a program like the one in Figure 2.5.

```java
if (x == hash(y)) {
    // do something interesting
}
```

Figure 2.5: A code path which concolic testing can execute.

The hash function in this example is likely something which it is difficult to reason about symbolically.[6] Doing so would constitute reversing the hash, something which the hash would likely be designed to prevent. The conditions it introduces on the input would likely be unsolvable by most constraint solvers (at least in a reasonable time). However, since concolic execution will have recorded the actual result of the hash call, it will know what a given y value hashes to. It can then set x to this value while keeping y fixed, thus solving the condition for this branch.
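The substitution step described above can be made concrete with a small sketch; the hash function here is a stand-in of our own, not one from the thesis:

```java
// Sketch of how a concolic engine enters the branch "if (x == hash(y))" from
// Figure 2.5: rather than inverting hash() symbolically, it reuses the
// concrete result observed during the first execution.
public class ConcolicSketch {
    // Stand-in for a function that is hard to reason about symbolically.
    public static int hash(int y) {
        return (y * 31 + 17) ^ (y >>> 3);
    }

    // The branch condition from Figure 2.5.
    public static boolean takesBranch(int x, int y) {
        return x == hash(y);
    }

    // Keeping y fixed at its concrete value, solve "x == hash(y)" by
    // substituting the result recorded during the concrete run for x.
    public static int solveForX(int concreteY) {
        return hash(concreteY);
    }
}
```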
It is also worthwhile to note that this is an advantage of concolic execution over standard blackbox fuzzing, which would just throw random values at this code for a while. Such a process is very unlikely to hit the condition where x is equal to the hash of y, though concolic execution can handle this case quite easily.[6]

There have been numerous advances in the area of dynamic test generation in the past few years which have increased its effectiveness. Many new strategies and algorithms have made the technique much faster and more able to target relevant areas of code.[6, 1, 18, 12]

Chapter 3

Related Work

There is a large body of work in the fields of test generation and specifically symbolic execution, as well as in automated assessment. However, there has been limited work in combining these two fields.

3.1 Automated Test Generation in Java

A number of frameworks have been created for automated test generation in the Java programming language. These tools incorporate a number of different test generation techniques which, while they are not used by the JSymTester, may also be applicable in the realm of student assessment.

3.1.1 Tools for Automated Test Generation

Many of the tools available for test generation use some variation of random test generation, which involves throwing random inputs at programs to generate interesting behavior. Randoop is one of these.[17] It implements a technique called feedback-directed random testing, which uses execution feedback gathered from executing test inputs as they are created in order to avoid generating redundant and illegal inputs. It does this by exploring the method sequence space for objects based on a specified contract. Those method sequences which cause a violation of the contract are marked as such, and the sequences which do not are saved as regression tests.[17] Another category of test-generation tools uses a technique called evolutionary testing.
This uses evolutionary algorithms to evaluate the "fitness" of test inputs based on particular criteria such as statement coverage, branch coverage, or the size of generated tests. These inputs are then mutated in an attempt to derive a more "fit" set of inputs. This technique can be used either at the method level to explore method inputs, or at the class level to find method call sequences. Two tools which implement this approach are TestFul and EvoSuite.[2, 5]

Many test-generation tools focus on the specification of contracts for methods or objects which can be used to generate tests. One such tool is Korat, which uses formal specifications for methods to generate tests. Korat takes these formal specifications in the form of method pre- and post-conditions. It generates test inputs by exploring the space of inputs which satisfy the pre-conditions, then determines whether the post-conditions are met by the run of the method. This allows for generating valid failing tests in the absence of a test oracle.[3]

A somewhat different, but related, field of research is that of test case testing, which aims to determine the quality of a given test suite. One of the popular ways of doing so is mutation testing, which mutates particular (or random) parts of the source code which is meant to be tested, in order to see if this mutation breaks the tests. Javalanche and Jester are two examples of tools using this technique.[16, 19] While this is not directly related to this thesis, it may be useful in the future for determining how well the JSymTester functions, by using it to compare a randomly mutated reference implementation to the original.

3.1.2 Concolic Execution in Java

While these tools are all useful, this thesis focuses on the use of symbolic and concolic execution in Java, for which there are a few frameworks.[20, 23, 24] The two most well-known of these are jCUTE and the Java PathFinder.
jCUTE

jCUTE was developed at the University of Illinois at Urbana-Champaign, and provides a concolic unit testing engine for Java.[20] It is based on CUTE for C, and works by tracking values in memory and the constraints on them to implement concolic execution. It does not appear to be under active development, and the source was not readily available.

Java PathFinder

JSymTester utilizes the Java PathFinder, an open source tool maintained by the NASA Ames Research Center which is currently under active development.[23] The Java PathFinder (JPF) provides a reimplementation of the Java Virtual Machine in Java, which allows developers using the framework to instrument the running of Java programs at the bytecode level. The Java PathFinder includes a module for symbolic and concolic test generation based on this bytecode instrumentation. This framework was used for this project.

3.2 Automated Assessment

A useful survey of automated assessment tools was performed by Ihantola et al. in 2010.[14] This survey described trends among a number of different automated testing tools, and found that most automated assessment is being done by assessing functionality (that is, the behavior of the program), rather than using other analytic tools to assess style or performance. The survey listed the different approaches which had been taken to the problem of analyzing student code, most of which depended on output comparison or unit testing (though often in combination with scripts or other testing frameworks).[14] One of the key problems identified by this survey is that most assessment tools are written as one-off programs, usually just for the purposes of one class or one assignment. Many instructors write their own frameworks, though there are some more generalized tools which are widely known.
This reveals a problem in that most tools are not written to be generally applicable and easy to use, though some attempts have been made.[14]

3.2.1 Tools for Automated Assessment

One such automated assessment tool is Web-CAT, which tries to get students to generate tests for their own code, but also provides tools for instructors to automate other testing of student code.[4] Since Web-CAT is extensible, it may be worthwhile to attempt to adapt the JSymTester to it in the future.

As mentioned above, one-off scripts or suites of unit tests are also used as automated assessment tools. These have the advantage of being generally easy to write and run, but rely solely on some form of output comparison in order to function. This means both that all those outputs must be determined beforehand, and that they must be determined by hand or with a reference implementation. Not only can this take a significant amount of time, but it is also possible that the generated inputs will not take into account the oddities of student code, as they are generally generated before the student submits their code for evaluation, or at least without direct knowledge of the structure of all submitted student programs.

WebIDE, a teaching platform developed at Cal Poly, also incorporates automated assessment of student code correctness. It works by sending code to "evaluators," which are implemented as web servers which compile and run this code, testing it with whatever unit tests the instructor provides. There are also evaluators for standard output comparison.
The WebIDE evaluator framework provides a relatively simple way to express the tests which must be performed on student code, and does not necessarily entail use of the student's code in the generation of these tests.[4]

3.3 Test Data Generation in Education

A similar form of test input generation for assessing student submissions has been tried once before.[13] This tool was built using an early version of JPF's symbolic testing framework, which was not publicly available at the time. As such, the tool was not made available. In addition, the symbolic framework of JPF has since changed a large amount, meaning the work done in [13] would no longer be compatible with the current JPF codebase. While the tool was somewhat related to JSymTester, the paper itself focused more on strategies for test generation and use of the JPF's model checking techniques, rather than on the creation of a tool based solely on symbolic execution.

Chapter 4

Implementation

JSymTester is based on the symbolic execution module for the Java PathFinder, and can be used independently as a command-line application, or as an evaluator for the WebIDE platform. In general, it takes two classes (one specified as the reference class, and one as the test class), and the name of one method to test. It then performs concolic test generation on the reference class and the test class to get a set of inputs. It runs these inputs through the reference implementation to get expected results, then runs the same inputs through the test implementation, comparing the results. Finally, it outputs the results as a list of successful and/or failed inputs. A more detailed description follows.

1. Load the test and reference classes
2. Check for the test method on each class
3. Add or replace the main() method for each class via bytecode manipulation
4. Run JPF concolic execution on the reference implementation
5. Run JPF concolic execution on the test implementation
6.
Rebuild a list of inputs from the results of the two above runs, reconstructing primitives and objects as necessary
7. For each input, run both the reference implementation and the test implementation, recording differences
8. Output differences, if any were recorded

4.1 JPF Integration

In order to do concolic execution of Java code, JSymTester utilizes the Java PathFinder, described above. JSymTester does this by starting up a JPF JVM using the symbolic execution framework's bytecode instrumentation. It specifies the reference class, the method to test, and a Listener object (based on the JPF symbolic framework's provided SymbolicListener) to watch the execution. The JPF begins execution using the main() method of the specified class, and the listener waits for execution to enter the method being tested. Once it enters that method, the listener waits for that method to return, at which point it records the path constraints of that run through the method, and the values of the inputs which satisfy these path constraints. The symbolic execution framework will then negate some part of this condition and run through the method again, and, again, the listener will record the path constraints at return time. This happens until there are no more negatable path conditions. The listener has then built up a list of path conditions and their solutions, which can be accessed by the JSymTester. The same thing is done again for the test version of the class, producing another set of inputs.

4.1.1 Adding a main() Method

One problem with the above sequence of events is that the JPF must have some point of entry to even begin execution at all. This means that any class which is being tested must have a main() method. This is a limitation of the JPF which is rather inconvenient if you wish to use it to test only a small function which doesn't necessarily have to run as a whole program.
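Whether a class already provides such an entry point can be checked with plain reflection before any bytecode manipulation takes place; a minimal sketch, with a helper name of our own choosing:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Sketch: detect via reflection whether a class declares a usable
// "public static void main(String[])" entry point.
public class EntryPointCheck {
    public static boolean hasMainMethod(Class<?> cls) {
        try {
            Method m = cls.getDeclaredMethod("main", String[].class);
            return Modifier.isPublic(m.getModifiers())
                && Modifier.isStatic(m.getModifiers())
                && m.getReturnType() == void.class;
        } catch (NoSuchMethodException e) {
            return false; // no main() declared on this class
        }
    }

    public static void main(String[] args) {
        System.out.println(hasMainMethod(EntryPointCheck.class));
    }
}
```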
Rather than forcing instructors to ensure that both their and their students' code have main() methods in them, JSymTester automatically adds a main() method for them using bytecode manipulation. Before beginning JPF execution, the JSymTester uses reflection to inspect the class being tested for a main() method. If a main() method does exist, it is removed. The JSymTester then adds a main() method to the class's bytecode using Javassist, a bytecode manipulation framework. This can then serve as the entry point for the JPF, without forcing instructors or students to write main() methods or ensure that their main() methods call the method they wish to test.

4.2 Object Reconstruction

Once the JSymTester has a list of inputs which each represent a different code path, it has to actually run the reference class with these inputs to generate the expected results. The solutions which are provided by the JPF's symbolic execution are in the form of classes used by that code (SymbolicIntegers and SymbolicReals), which can't be passed directly to the method under test. However, these classes do maintain a concrete representation of these values (as is required by concolic testing). In order to actually call the method, JSymTester must convert inputs from this form to actual Java primitives or Java objects. For primitives, this is relatively simple, as the byte, short, char, int, and long data types can easily be converted from the concrete value stored in SymbolicIntegers, and the float and double types can easily be retrieved from SymbolicReals. Booleans are also straightforward, as they are stored as SymbolicIntegers of value 0 or 1, which are easily converted. The problem arises when the method being tested takes some arbitrary object as an input.
The JPF symbolic execution framework supports this by using lazy initialization.[15] The output of this is a number of integers and reals which represent the parent object and its fields (and the fields of any field objects, recursively). This must be reconstructed into the actual object in order to run the actual implementation. To reconstruct these objects, JSymTester constructs a key-value map to represent each input object. The keys are the field names, and the values are the values of these fields (either as a Java primitive or as another map representing another object). This map can then be used, along with reflection, to rebuild the object to be passed in. JSymTester does this using the ObjectMapper provided by the Jackson JSON library.

4.3 SymTestRunner

The command-line version of the tool works as follows:

SymTestRunner [-v] REFCLASS TESTCLASS METHODNAME

REFCLASS is the name of the reference class to use, and TESTCLASS is the name of the test class to use. Both of these must be in the classpath. METHODNAME is the name of the method to be tested. The program will either output "Successful Run" on success, or a list of failures if any outputs don't match between the reference and test classes. The output can also be tweaked to be more verbose.

4.4 Web-IDE Evaluator

JSymTester was also adapted to be used as a WebIDE evaluator. WebIDE evaluators are web services which can be used to evaluate some piece of student code. Specifically, the JSymTester evaluator takes three arguments: refClass, testClass, and methodName. These mirror the parameters of the command-line version, but the classes are specified in source code form rather than as a class on the classpath. The evaluator compiles these on-the-fly and runs them through the JSymTester, returning the same results as the command-line version.
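On-the-fly compilation of submitted source is available in the JDK itself through the javax.tools API; a rough sketch of the idea, simplified to file-based compilation and not the evaluator's actual code:

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: compile a submitted class from source and load it for testing.
public class CompileSketch {
    public static Class<?> compileAndLoad(String className, String source) throws Exception {
        Path dir = Files.createTempDirectory("submission");
        Path src = dir.resolve(className + ".java");
        Files.write(src, source.getBytes(StandardCharsets.UTF_8));

        // Requires a JDK: getSystemJavaCompiler() returns null on a bare JRE.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler.run(null, null, null, src.toString()) != 0) {
            throw new IllegalArgumentException("compilation failed");
        }

        // The loader is deliberately left open so the loaded class stays usable.
        URLClassLoader loader = new URLClassLoader(new URL[]{ dir.toUri().toURL() });
        return loader.loadClass(className);
    }
}
```

The loaded class can then be driven by reflection, just as the generated inputs are fed to the method under test.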
To make use of this evaluator easier, there is also a version which takes both pieces of code in the form of method definitions, rather than the code for whole classes. This allows instructors writing labs for WebIDE to have the students just write a simple method without having to wrap it in a class. This evaluator encapsulates the source code in a class automatically, then compiles it and sends it to the JSymTester just as the first evaluator does.

Chapter 5

Validation

The most important aspect of the JSymTester is that it is able to identify whether student code behavior matches the reference implementation. To this end, this chapter compares the results of the JSymTester to the results from the more traditional unit tests or scripts written to test student code.

5.1 Test Setup

The first step in performing this comparison was to determine where to get sample student inputs. In order to do this in a sensible way, I chose to pass all student submissions from two classes, for two Web-IDE labs, to the JSymTester. I modified two previously-written introductory labs focusing on if statements and functions, adding the JSymTester evaluator to all of the evaluation steps which used unit testing or Java function call output comparison. This passed student code, along with a reference implementation of the exercise, to a JSymTester evaluator. This modified JSymTester evaluator always returned success to the user, in order to avoid allowing any problems in its methods or implementation to affect student progress. However, it allowed me to collect a number of code samples along with the results of the previously-written evaluators. I then compared the results of the old evaluators with those of the JSymTester. The hope was that the JSymTester would not report any submissions as correct for which the manually-written tests would fail (unless those tests happened to be written poorly).
In addition, it could catch errors in student code which the manually-written tests do not; this would be further evidence of its efficacy.

5.2 Results

The 11 exercises across these two labs received a total of 1306 submissions from students across two classes. Of these, 806 submissions were able to be compiled and run without timing out. One of these submissions used some math which was not solvable by the Choco constraint solver used by JSymTester, so I have excluded it from the results (it was not implemented correctly, in any case). I recorded all of the results for the remaining 805, both from the JSymTester and from the original evaluators. I also manually inspected all submissions which did not textually match a known good solution, in order to verify correctness. The results can be seen in Table 5.1. A "Success" represents a piece of code which passed the given validation, while a "Failure" means it failed that validation.

            Original Evaluators   JSymTester   Manual Inspection
Successes   554                   570          519
Failures    251                   235          286

Table 5.1: Overall test results.

The results show that the JSymTester is slightly behind the original evaluators in overall accuracy. It does not catch all of the problems that were caught by the original evaluators. However, if we inspect the data at the level of the individual exercises, the situation changes slightly. Table 5.2 shows the results for each exercise, with Success shortened to "S" and Failure shortened to "F."

Exercise   JST S   JST F   Orig. S   Orig. F   Actual S   Actual F
AA         43      0       41        2         41         2
AA2        42      40      50        32        38         44
AC         99      5       54        50        54         50
C          36      7       36        7         36         7
D          43      2       43        2         43         2
B.         69      22      80        11        69         22
C.         49      39      56        32        49         39
D.         61      54      64        51        61         54
E.         36      48      38        46        36         48

Table 5.2: Test results by exercise.

Table 5.2 shows that nearly all of the cases where JSymTester failed were in one particular exercise. This is even easier to see in Table 5.3, which shows the number of failures not caught by each evaluator (excluding those exercises where the evaluators missed nothing).

Table 5.3: Failures not caught by evaluators.
Exercise   Caught only by JST   Caught only by Orig.   Caught by neither
AA         0                    2                      0
AA2        8                    0                      4
AC         0                    45                     0
B          11                   0                      0
C          7                    0                      0
D          3                    0                      0
E          2                    0                      0
Total      31                   47                     4

The troublesome exercise, which I have labeled AC, gave the student the task of implementing an age calculator. This method took the current year, month, and day, as well as the year, month, and day of a person's birthday. It then returned the age of this person in years, rounding down, as is commonly done. Most of the student implementations for this exercise did not use any branches at all. Neither did the reference implementation. This meant that after one pass through the method, there were no constraints gathered to be falsified. The concolic execution would cease, providing only one input, usually 0 for every single argument. This meant that only this one input was being passed to student ageCalculator implementations. Many of these implementations were incorrect, but happened to give the correct results for this case.

This reveals a larger problem with JSymTester and concolic testing in general: they are aimed at generating inputs to maximize code coverage. They do not generate inputs to test particular mathematical equations. This is the cause of the mistakes in exercise AC, as well as the six other misses in exercises AA and AA2. This is a deficiency in concolic testing, so it is slightly out of the scope of this paper. Some possible solutions are provided in Chapter 6, however.
Dropping the AC exercise from the analysis would put JSymTester in a much more favorable position, mistaking only six bad implementations for good ones, and correctly identifying 31 incorrect implementations. This provides evidence that the JSymTester, while it underperforms on code which has few branch points, actually performs as well as or better than the traditional approaches on code which has more branch points. Chapter 6 Future Work There are a number of options for expanding the potential uses of the JSymTester. In addition, there are also many possible avenues of research related to the use of automated test generation in automated student code assessment. 6.1 Beyond Code Coverage The key problem explained in the Results section of this thesis is that the JSymTester is designed to find inputs which provide good code coverage, not to find all possible interesting inputs. This means that for many programs which have few branches, the JSymTester will find only a small number of inputs. These inputs may not fully test the code. For example, consider a function which simply takes an integer \( x \) and returns \( x + 2 \), and another function which takes \( x \) and then just returns 2, ignoring \( x \) completely. JSymTester would likely not find any differences between these two functions. This is because the JPF will start out by giving each function an \( x \) value of 0. In each case, since there are no branches, it will gather no path constraints, have nothing to falsify, and consider its input generation done. Passing an \( x \) value of 0 to both of these functions results in a return value of 2, passing JSymTester’s comparison of the output of the two functions. Obviously, this can lead to problems for certain coding exercises. There are a couple of solutions to this which could be implemented in the future. First, some extra values could be picked to run through the program. This could be done either only for the first run through the program, or for all runs.
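The first of these fixes can be sketched as follows (a hypothetical harness of my own; JSymTester's actual integration point with the JPF would differ): seed the output comparison with a handful of extra inputs besides the solver's default, so that even branch-free methods are probed more than once.

```java
import java.util.function.IntUnaryOperator;

class SeededProbe {
    // Default solver input (0) plus a few extra seeds; the extras are an
    // arbitrary choice for illustration.
    static final int[] SEEDS = {0, 1, -1, 7};

    // Compare a reference and a student implementation on every seed.
    static boolean behaviorallyEqual(IntUnaryOperator ref, IntUnaryOperator student) {
        for (int x : SEEDS)
            if (ref.applyAsInt(x) != student.applyAsInt(x)) return false;
        return true;
    }

    public static void main(String[] args) {
        IntUnaryOperator ref = x -> x + 2; // the x + 2 function from the text
        IntUnaryOperator bad = x -> 2;     // ignores x completely
        // With only the default seed 0 both agree; the extra seeds
        // distinguish them:
        System.out.println(behaviorallyEqual(ref, bad)); // false
    }
}
```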
It may be worthwhile to pick a few random inputs to start with, or use a couple of different inputs for each path constraint. This would require some extra work integrating with the constraint solver to force it to give multiple solutions to the same path constraint. Alternative test generation strategies, such as evolutionary testing, could also be applied here to generate these additional values. Second, it may be possible to compare the symbolic return values of the methods in question. JSymTester already records these in terms of the symbolic input variables, so the data is available. This would also require some more integration with the constraint solver, however, since it would require comparing constraints for logical equivalence. Just comparing the particular representation of a set of constraints would not be accurate, as the constraints may have been gathered in different orders or in different terms (for example, \( x < 5 \) rather than \( x \leq 4 \)). 6.2 Language Feature Support The current implementation of the JSymTester focuses only on testing code at the method level. It supports inputs and outputs in the form of all Java primitives, as well as any objects composed only of these primitives. This does not include many of the classes provided by the Java class library, including Strings. As it is currently written, JPF’s symbolic execution can support either lazily-initialized objects (as is used by JSymTester) or Java String objects, but not both at the same time. JSymTester could likely bypass some of this by running the JPF twice, once in each mode. The added complexity of this was not implemented for this thesis, as none of the student code examined used Strings in branch conditions. It would also be useful to support more kinds of units to test. JSymTester itself supports testing only methods. Adding testing of constructors would be helpful for evaluating student object construction.
However, the JPF’s symbolic execution framework is currently targeted at testing methods, meaning that it may be somewhat difficult to make these changes without also modifying the symbolic execution module. JSymTester could also add some options for testing a class implementation as a whole, looking at calling different methods of objects in different orders. This would require taking advantage of other topics in test generation (namely, method sequence exploration).[13] In addition, the way in which JSymTester reconstructs objects means that it does not support comparison by reference of the input objects (as all input objects are created separately). This should be possible to rectify with a more advanced integration with the JPF, as it handles this case. 6.3 Ease of Use There are a few things which could be added to the tool to make it easier to use. The first of these would be removing the need for a default constructor to exist for any objects used as parameters to the methods being tested. This is a limitation of the Jackson ObjectMapper being used to reconstruct the objects, but should be easily overcome using bytecode manipulation (much as was done for the required main() method). In addition, it would be useful to allow the instructor to specify an equals method to be used to compare the output of the test code with that of the reference code. As it is, the system uses the equals method of the object being returned. A custom method would mean that instructors could compare only a subset of fields, or use different comparisons for different exercises, without having to modify the classes in use. 6.4 Model-Checking It is worthwhile to note that JSymTester does not take advantage of any of the model-checking capabilities of the JPF. This may be a useful addition in the future, allowing instructors to write something which can more deeply inspect the student’s code as it runs, checking for things other than just correct output.
This would, however, require the instructor to learn a bit about the JPF, something which JSymTester does not require as currently implemented. 6.5 Constructive Feedback Finally, it may be worthwhile to assess the effectiveness of the feedback which JSymTester (or any other such tool) can provide. With the path constraints created with symbolic execution, it is possible to give the student feedback which defines the conditions under which their code fails in the abstract rather than stating exact values. For example, instead of saying “Your code failed for inputs: \( x = 5, y = 6 \),” the JSymTester could reply instead with a more informative message like “Your code failed under the following conditions: \( x < y, y > 5 \).” This allows students to approach their error from the standpoint of conditions which actually exist in their code, rather than focusing on specific inputs (in the latter case, they might even be tempted to simply add a specific special case for a failing input!). To evaluate which of these approaches might be better, it would be best to try both approaches with different sets of students to see which group finds the messages more helpful. Current research suggests that the abstract information may be more helpful.[13] Chapter 7 Conclusion The JSymTester is a new kind of student assessment tool utilizing symbolic execution to automatically generate tests for student code that are actually based on the conditions in that code. This is a benefit for professors, as it is easy to simply implement a reference implementation of a programming exercise, instead of having to write tests which can only hope to cover all of the edge cases which may be present in student code. The JSymTester is based on the Java PathFinder symbolic execution module, and so can benefit from any additional capabilities added to it. It is available both as a command line tool, and as a Web-IDE evaluator. 
In a simple trial comparing the JSymTester Web-IDE evaluator against the more traditional output comparison evaluators, the JSymTester behaved similarly to the original evaluators. Though it behaved poorly in one case, in other cases it found errors that the traditional testing did not. This provides evidence that the JSymTester can effectively supplement output comparison and, with a few more tweaks, even replace it. Furthermore, this shows that automated test generation techniques are applicable in the area of student code assessment. Bibliography
**Improving the Search Capabilities of a CFLP(FD) System** Ignacio Castiñeiras\(^1\), Fernando Sáenz-Pérez\(^2\) ncasti@fdi.ucm.es, fernan@sip.ucm.es Dept. Sistemas Informáticos y Computación \(^1\) Dept. Ingeniería del Software e Inteligencia Artificial \(^2\) Universidad Complutense de Madrid, Spain **Abstract:** In this work we focus on the CFLP system \(TOY(FD)\), implemented in SICStus Prolog and supporting \(FD\) constraints by interfacing the external CP(\(FD\)) solvers of Gecode and ILOG Solver. We extend \(TOY(FD)\) with new search primitives, in a setting easily adaptable to other Prolog CLP or CFLP systems. We describe the primitives from a solver-independent point of view, pointing out some novel concepts not directly available in any CP(\(FD\)) library we are aware of, as well as how to specify some search criteria at \(TOY(FD)\) level and how easily these strategies can be combined to set different search scenarios. We describe the implementation of the primitives, presenting an abstract view of the requirements and how they are targeted to the Gecode and ILOG libraries. We evaluate the resulting \(TOY(FD)\) architecture and we use some benchmarks to show that the use of the search strategies improves its solving performance. **Keywords:** CFLP, FD Search Strategies, Solver Integration 1 Introduction The use of *ad hoc* search strategies has been identified as a key point for solving Constraint Satisfaction Problems (CSP's) [Tsa93], allowing the user to interact with the solver in the search for solutions (exploiting the user's knowledge about the structure of the CSP and of its solutions).
Different paradigms provide different expressivity for specifying search strategies: Constraint Logic Programming CLP(\(FD\)) [JM94] and Constraint Functional Logic Programming CFLP(\(FD\)) [Han07] provide a declarative view of this specification, in contrast to the procedural one offered by Constraint Programming CP(\(FD\)) [MS98] systems (which make the programming of a strategy depend on low-level details associated with the constraint solver, and even on the concrete machine on which the search is performed). Also, due to their model reasoning capabilities, CLP(\(FD\)) and CFLP(\(FD\)) treat search primitives as simple expressions, making it possible to: (1) Place a search primitive at any point of the program, (2) Combine several primitives to develop complex search heuristics, (3) Intermix search primitives with constraint posting, and (4) Use indeterminism to apply different search scenarios for solving a CSP. The main contribution of this paper is to present a set of search primitives for CLP(\(FD\)) and CFLP(\(FD\)) systems implemented in Prolog, and interfacing external CP(\(FD\)) solvers with a C++ API.\(^*\) (\(^*\) This work has been partially supported by the Spanish projects TIN2008-06622-C03-01, UCM-BSCH-GR58/08-910502, and S2009TIC-1465.) The motivation of this approach is to take advantage of: (i) The high expressivity of CLP(FD) and CFLP(FD) for specifying search strategies, and (ii) The high efficiency of CP(FD) solvers. The paper focuses on the CFLP(FD) system TOY(FD) [FHSV07], more precisely on the system versions TOY(FDg) and TOY(FDi) [CS12], interfacing the external CP(FD) solvers (with C++ API) of Gecode 3.7.3 [STL12] and IBM ILOG Solver 6.8 [IBM10], resp. TOY(FD) is completely developed in SICStus Prolog [Mat12]. TOY(FD) programs follow a syntax mostly borrowed from Haskell [PJ02], with the remarkable exception that program and type variables begin with upper-case letters whereas data constructors, types and functions begin with lower-case.
Instead of using an abstract machine for running bytecode or intermediate code from compiled programs, the TOY compiler uses SICStus Prolog as an object language [LLR93]. Regarding search, TOY(FD) has offered two possibilities up to now: (1) Defining a new search from scratch at TOY(FD) level (with a TOY(FD) function that uses reflection functions to represent the search procedure), and (2) Using the search primitive labeling, which simply relies on predefined search strategies already existing in Gecode and ILOG, resp. The use of external CP(FD) solvers (with C++ API) opens a third possibility, which we exploit in this paper: Enhancing the search language of TOY(FDg) and TOY(FDi) with new parametric search primitives, implementing them in Gecode and ILOG by extending their underlying search libraries. The paper is organized as follows: Section 2 presents an abstract description of the new parameterizable TOY(FD) search primitives, pointing out some novel concepts not directly available in any CP(FD) library we are aware of, as well as how to specify some search criteria at TOY(FD) level and how easily these strategies can be combined to set different search scenarios. Section 3 describes the implementation of the primitives in TOY(FD), presenting an abstract view of the requirements, how they are targeted to the Gecode and ILOG libraries, and an evaluation of the resulting architecture of the system. Section 4 presents some preliminary although encouraging case studies, showing that the use of the search strategies improves the solving performance of both TOY(FDg) and TOY(FDi). Finally, Section 5 presents some conclusions and future work. 2 Search Primitives Description This section presents eight new TOY(FD) primitives for specifying search strategies, allowing the user to interact with the solver in the search for solutions.
These primitives bridge the gap between the other two classical approaches available in TOY(FD): Defining a whole search procedure at TOY level (by using reflection functions), and relying on the set of predefined search strategies available in the solver library. Each primitive has its own semantics, and it is parameterizable by several basic components. The search primitives are considered by the language as simple expressions, so intermixing search strategies with the regular posting of constraints is allowed. The section describes the primitives and their components (including their type declarations) from an abstract, solver-independent point of view. It emphasizes: (1) Some novel search concepts that arise, which are not available in the predefined search strategies of any CP solver, (2) How easy and expressive it is to specify some search criteria at TOY(FD) level, and (3) The appealing possibilities TOY(FD) offers to apply different search strategies for solving a CP problem. 2.1 Labeling Primitives **Primitive lab** \[ \text{lab :: varOrd \to valOrd \to int \to [int] \to bool} \] This primitive collects (one by one) all possible combinations of values satisfying the set of constraints posted to the solver. It is parameterized by four basic components. The first and second ones represent the variable and value order criteria to be used in the search strategy, resp. To express them we have defined in TOY the enumerated datatypes `varOrd` and `valOrd`, covering all the predefined criteria available in the Gecode documentation [STL13]. They also include a last case (userVar and userVal, resp.) in which the user implements their own variable/value selection criteria at TOY(FD) level. The third element \(N\) represents how many variables of the variable set are to be labeled. This represents a novel concept not available in the predefined search strategies of any CP solver. The fourth argument represents the variable set \(S\).
Thus, the search heuristic uses `varOrd` to label just \(N\) variables of \(S\). The next TOY(FD) program (top) and goal (bottom) show how expressive, easy and flexible it is to specify search criteria in TOY(FD). In the example, the search strategy of the goal uses the userVar and userVal selection criteria (specified by the user in the functions `myVarOrder` and `myValOrder`, resp.) The `lab` search strategy computes partial solutions to the TOY(FD) goal domain [X,Y,Z] 0 4, Y /= 1, Y /= 3, Z /= 2. Then, “rest of TOY(FD) goal” is processed to compute complete solutions. Our search strategy acts over the set of variables [X,Y,Z], but it is only expected to label two of them. ```haskell myVarOrder :: [int] -> int
myVarOrder V = fst (foldl cmp (0,0)
                 (zip (take (length V) (from 0)) (map (length . get_dom) V)))

myValOrder :: [[int]] -> int
myValOrder D = head (last D)

from N = [N | from (N+1)]

cmp :: (int,int) -> (int,int) -> (int,int)
cmp (I1,V1) (I2,V2) = if (V1 > V2) then (I1,V1) else (I2,V2) ``` TOY(FD)> domain [X,Y,Z] 0 4, Y /= 1, Y /= 3, Z /= 2, lab userVar userVal 2 [X,Y,Z], ... (REST OF TOY(FD) GOAL) The function `myVarOrder` selects first the variable with the most intervals in its domain. It receives the list of variables involved in the search strategy, returning the index of the selected one. To do so it uses: (i) The auxiliary functions `from` and `cmp`. (ii) The predefined functions `fst`, `foldl`, `zip`, `take`, `length`, `map`, `head`, `last` and `(.)` (all of them with an equivalent semantics as in Haskell). (iii) The reflection function `get_dom`, which accesses the internal state of the solver to obtain the domain of a variable (this domain is presented as a list of lists, where each sublist represents an interval of values). The function `myValOrder` receives as its only argument the domain of the variable, returning the lower bound of its upper interval.
So, in conclusion, the first two (partial) solutions obtained by the `lab` strategy are: \((X \in 0..4, Y \rightarrow 4, Z \rightarrow 3)\) and \((X \in 0..4, Y \rightarrow 4, Z \rightarrow 4)\). Figure 1: Applying labB and fragB to the n-queens problem **Primitive labB** \[ \text{labB} :: \text{varOrd} \rightarrow \text{valOrd} \rightarrow \text{int} \rightarrow \text{[int]} \rightarrow \text{bool} \] This primitive uses the same four basic elements as lab. However, its semantics is different, as it follows the varOrd and valOrd criteria to explore just one branch of the search tree, with no backtracking allowed. Let us explain it by using the 4-queens example. Using \( \text{lab unassignedLeftVar smallestVal 0 [X1,X2,X3,X4]} \) (where 0 in the third argument stands for labeling all the variables) we obtain the two solutions: \( X1 \rightarrow 2, X2 \rightarrow 4, X3 \rightarrow 1, X4 \rightarrow 3 \) and \( X1 \rightarrow 3, X2 \rightarrow 1, X3 \rightarrow 4, X4 \rightarrow 2 \). However, if we use \( \text{labB unassignedLeftVar smallestVal 0 [X1,X2,X3,X4]} \) the strategy fails, getting no solutions. The left-hand side of Fig. 1 (the 4×4 board and the search tree) shows the computation process. First, the selected criteria assign \( X1 \rightarrow 1 \) at the root node (1), leading to node 2. Propagation reduces the search space to \( (X2 \in 3..4, X3 \in \{2,4\}, X4 \in 2..3) \), pruning nodes 3 and 4. Then, the computation assigns \( X2 \rightarrow 3 \) (leading to node 5), and propagation leads to an empty domain for \( X3 \). So, the explored tree path leads to no solutions, and so does the whole computation. As we have seen, propagation during search modifies the intended branch to be explored (in our example, it explores the branch 1–2–5 instead of 1–2–3).
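For readers who prefer an operational rendering, the following sketch (our own illustration in Java, not TOY(FD) code, with only one-step propagation rather than a full propagation engine) reproduces the labB walk above: leftmost unassigned variable, smallest value, a single branch, and no backtracking.

```java
import java.util.*;

class LabBQueens {
    // Domains of X1..X4 (row of the queen placed in each column), values 1..4.
    static List<TreeSet<Integer>> doms = new ArrayList<>();

    // Prune values inconsistent with fixing column i to row v
    // (same row or same diagonal). Returns false if a domain empties.
    static boolean propagate(int i, int v) {
        for (int j = 0; j < 4; j++) {
            if (j == i) continue;
            int d = Math.abs(j - i);
            doms.get(j).remove(v);
            doms.get(j).remove(v + d);
            doms.get(j).remove(v - d);
            if (doms.get(j).isEmpty()) return false;
        }
        return true;
    }

    // labB-style single branch: leftmost unassigned variable, smallest
    // value; any propagation failure fails the whole search.
    static boolean labB() {
        for (int i = 0; i < 4; i++) {
            if (doms.get(i).size() == 1) continue;  // already assigned
            int v = doms.get(i).first();            // smallestVal
            doms.get(i).clear();
            doms.get(i).add(v);
            if (!propagate(i, v)) return false;     // no backtracking
        }
        return true;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++)
            doms.add(new TreeSet<>(List.of(1, 2, 3, 4)));
        // As in the text: X1 -> 1, then X2 -> 3 empties X3's domain.
        System.out.println(labB()); // false: the single branch dead-ends
    }
}
```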
**Primitive labW** \[ \text{labW} :: \text{varOrd} \rightarrow \text{bound} \rightarrow \text{int} \rightarrow \text{[int]} \rightarrow \text{bool} \] This primitive performs an exhaustive breadth exploration of the search tree, storing the satisfiable leaf nodes reached in order to sort them later by a specified criteria. Let us consider a first example to understand the semantics of labW. The following TOY(FD) goal has four variables, where two implication constraints relate X and Y with V1 and V2, resp. TOY(FD)> domain [X,Y] 0 1, post_implication X (#=) 1 V1 (#>) 1, domain [V1,V2] 0 3, post_implication Y (#=) 0 V2 (#>) 0, labW unassignedLeftVar smallestSearchTree 2 [X,Y,V1,V2], ... If we had used the \( \text{lab unassignedLeftVar smallestVal 2 } [X,Y,V1,V2] \) strategy (instead of the labW one) to label the first two unbound vars of \( [X,Y,V1,V2] \), then the search would have explored the search tree obtaining (one by one) the next four feasible solutions: \( (X \rightarrow 0, Y \rightarrow 0) \), \( (X \rightarrow 0, Y \rightarrow 1) \), \( (X \rightarrow 1, Y \rightarrow 0) \) and \( (X \rightarrow 1, Y \rightarrow 1) \). Fig. 2 represents the exploration of the search tree for obtaining those four solutions, where each black node represents a solution, and the triangle below it represents the size of the remaining search space (the product of the cardinalities of V1 and V2). As we see, whereas the first solution computed by lab leads to computing the “rest of the TOY(FD) goal” from a 12-candidate search space, the third solution leads to a 6-candidate one. The primitive labW explores the search tree exhaustively in breadth, storing in a data structure DS each feasible node leading to a solution. Once the tree has been completely explored, the solutions are obtained (one by one) by using a criteria to select and remove the best node from DS.
In the example, the selected criteria smallestSearchTree selects first the node with the smallest product of the cardinalities of V1 and V2 (returning first the solution with the 6-candidate search space). The order in which the labW strategy of the goal delivers the solutions is presented in Fig. 2. Coming back to the definition of labW, the first parameter represents the variable selection criteria (no value selection is necessary, as the search is exhaustive over all the values of the selected variables). The second parameter represents the best-node selection criteria. To express it we have defined in TOY the enumerated datatype ord, ranging over the smallest/largest remaining search space (the product of the cardinalities of the labeling- or solver-scope variables). Again, a last case (userBound) allows specifying the bound criteria at TOY(FD) level. The third parameter sets the breadth level of exhaustive exploration of the tree (represented as a horizontal black line in Fig. 2). Finally, as usual, the last parameter is the set of variables to be labeled. The next TOY(FD) program (top) and goal (bottom) present a second example, with a bound criteria specified in the user function myBound. The best-node selection procedure traverses all the obtained nodes in DS, selecting first the one with minimal bound value. In this context, the user criteria specified in myBound assigns to each node minus the number of its search variables with singleton domains. Once again, the function myBound also relies on auxiliary, prelude and reflection functions. The first two obtained solutions are \((X \rightarrow 1, Y \rightarrow 1, A \rightarrow 0, B \rightarrow 0, C \rightarrow 0)\) and \((X \rightarrow 2, Y \rightarrow 1, A \in 0..1, B \rightarrow 0, C \rightarrow 0)\), resp.
```haskell
isBound :: [[int]] -> bool
isBound [[A,A]]      = true
isBound [[A,B]]      = false <== B /= A
isBound [[A,B] | RL] = false <== length RL > 0

myBound :: [int] -> int
myBound V = - (length (filter isBound (map get_dom V)))
```

TOY(FD)> domain [X,Y] 1 2, domain [A,B,C] 0 5, A #< X, B #< Y, C #< Y, labW unassignedLeftVar userBound 2 [X,Y,A,B,C]

In summary, labW represents a novel concept not available in the predefined search strategies of any CP solver. However, it must be used carefully, as exploring the tree very deeply can lead to an explosion of feasible nodes, producing memory problems for DS and becoming very inefficient (due to the time spent on exploring the tree and selecting the best node). **Primitive labO** \[ \text{labO :: optType } \rightarrow \text{varOrd } \rightarrow \text{valOrd } \rightarrow \text{int } \rightarrow [\text{int}] \rightarrow \text{bool} \] This primitive performs a standard optimization labeling. The first parameter optType contains the optimization type (minimization/maximization) and the variable to be optimized. The other four parameters are the same as in the lab primitive.
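As a small illustration of the bound machinery behind labW (our own sketch, not TOY(FD) or solver code; we take a domain to be a list of [lo, hi] intervals, as get_dom presents it), the smallestSearchTree measure is simply the product of the remaining domain cardinalities:

```java
import java.util.List;

class SearchSpace {
    // Cardinality of one domain, given as a list of [lo, hi] intervals.
    static long cardinality(List<int[]> dom) {
        long n = 0;
        for (int[] iv : dom) n += iv[1] - iv[0] + 1;
        return n;
    }

    // Product of cardinalities: the remaining-search-space bound that
    // smallestSearchTree minimizes over the nodes stored in DS.
    static long searchSpace(List<List<int[]>> doms) {
        long p = 1;
        for (List<int[]> d : doms) p *= cardinality(d);
        return p;
    }

    public static void main(String[] args) {
        // The labW example from the text: after X -> 0, Y -> 0 we have
        // V1 in 0..3 and V2 in 1..3, i.e. 12 candidates ...
        System.out.println(searchSpace(List.of(
            List.of(new int[]{0, 3}), List.of(new int[]{1, 3})))); // 12
        // ... while X -> 1, Y -> 0 gives V1 in 2..3 and V2 in 1..3: 6.
        System.out.println(searchSpace(List.of(
            List.of(new int[]{2, 3}), List.of(new int[]{1, 3})))); // 6
    }
}
```

These are exactly the 12- and 6-candidate spaces attached to the black nodes of the labW example.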
2.2 Fragmentize Primitives \[ \text{frag} :: \text{domFrag} \rightarrow \text{varOrd} \rightarrow \text{intervalOrd} \rightarrow \text{int} \rightarrow [\text{int}] \rightarrow \text{bool} \\ \text{fragB} :: \text{domFrag} \rightarrow \text{varOrd} \rightarrow \text{intervalOrd} \rightarrow \text{int} \rightarrow [\text{int}] \rightarrow \text{bool} \\ \text{fragW} :: \text{domFrag} \rightarrow \text{varOrd} \rightarrow \text{bound} \rightarrow \text{int} \rightarrow [\text{int}] \rightarrow \text{bool} \\ \text{fragO} :: \text{domFrag} \rightarrow \text{optType} \rightarrow \text{varOrd} \rightarrow \text{intervalOrd} \rightarrow \text{int} \rightarrow [\text{int}] \rightarrow \text{bool} \] These four new primitives are mates of the lab* ones, but each variable is not labeled (bound) to a value; instead it is fragmented (pruned) to a subset of the values of its domain. Let us consider an introductory example to motivate the usefulness of these new primitives. We suppose that: (i) A goal contains \( V \) variables and \( C \) constraints, with \( V' \equiv \{V_1, V_2, V_3\} \) a subset of \( V \). (ii) The constraint domain \( V' \) 1 9 belongs to \( C \). (iii) No constraint of \( C \) relates the variables of \( V' \) among themselves, but some constraints relate \( V' \) with the rest of the variables of \( V \). The left- and right-hand sides of Fig. 3 present the search tree exploration achieved by the frag* and lab* search primitives, resp. In the case of frag*, the three variables of \( V' \) have been fragmented into the intervals \((1, \ldots, 3), (4, \ldots, 6)\) and \((7, \ldots, 9)\), leading to exponentially fewer leaf nodes (27) than the lab* exploration (729). On the one hand, if it is known that there is only one solution to the problem, the probability of finding the right combination of \( V' \) values is thus higher in frag* than in lab*.
On the other hand, the remaining search spaces at the leaf nodes of lab* are expected to be exponentially smaller than those of frag*, due to the stronger propagation in V' (which is also expected to lead to more pruning in the rest of the variables of V). Thus, the frag* search strategies can be seen as a more conservative technique: there are fewer expectations of highly reducing the search space, as variables are not bound, but there is a higher probability of choosing a subset containing values that lead to solutions (in what can be seen as a sort of generalization of the first-fail principle [MS98]). Coming back to the definition of each frag* primitive, two differences arise w.r.t. its lab* counterpart: (1) It contains as an extra basic component (first argument) the datatype domFrag, which specifies the way the selected variable is fragmented. The user can choose between partition n and intervals. The former fragments the domain values of the variable into n subsets of the same cardinality. The latter looks for already existing intervals in the domain of the variable, splitting the domain on them. For example, let us suppose that a goal computes domain [X] 0 16, X /= 9, X /= 12. Whereas applying partition 3 would split the domain into three subsets of the same cardinality, applying intervals would split it into the three already existing intervals 0..8, 10..11 and 13..16.

Figure 3: frag vs lab Search Tree

2.3 Applying Different Search Scenarios

TOY(FD) supports non-deterministic functions, with possibly several reductions for given, even ground, arguments. The rules are applied following their textual order, and both failure and a user request for a new solution trigger backtracking to the next unexplored rule. In this setting, different search strategies can be sequentially applied for solving a CP problem.
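To make the two fragmentation modes of domFrag concrete, here is a hedged Python sketch (the helper names are ours; in TOY(FD) the fragmentation happens inside the solver): partition n splits the domain values into n subsets of (nearly) the same cardinality, while intervals splits the domain at the holes left by earlier prunings, as in the domain [X] 0 16, X /= 9, X /= 12 example of Section 2.2.

```python
def partition(dom, n):
    # partition n: split the (sorted) domain values into n subsets
    # of (nearly) the same cardinality.
    k, r = divmod(len(dom), n)
    out, i = [], 0
    for j in range(n):
        size = k + (1 if j < r else 0)
        out.append(dom[i:i + size])
        i += size
    return out

def intervals(dom):
    # intervals: split the (sorted) domain at its holes, i.e., at
    # the already existing intervals produced by earlier prunings.
    out, run = [], [dom[0]]
    for v in dom[1:]:
        if v == run[-1] + 1:
            run.append(v)
        else:
            out.append(run)
            run = [v]
    out.append(run)
    return out

# domain [X] 0 16, X /= 9, X /= 12 leaves 15 values:
dom = [v for v in range(17) if v not in (9, 12)]
frags = intervals(dom)        # the intervals 0..8, 10..11 and 13..16
parts = partition(dom, 3)     # three subsets of 5 values each
```

The leaf-count argument of Fig. 3 follows the same arithmetic: fragmenting 3 variables into 3 intervals yields 3³ = 27 leaves, versus 9³ = 729 when labeling each variable over 9 values.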
For example, after posting V and C to the solver, the TOY(FD) program (top) and goal (bottom) presented below use the non-deterministic function f to specify three different scenarios for the solving of the CP problem described in Section 3.5. Each scenario ends with an exhaustive labeling of the set of variables V. However, the search space s this exhaustive labeling has to explore can be highly reduced by the previous evaluation of f.

    f :: [int] -> bool
    f [V1,V2,V3] = true <==
        fragB (partition 4) unassignedLeftVar random 0 [V1],
        labB unassignedLeftVar smallestVal 0 [V2,V3]
    f [V1,V2,V3] = true <==
        fragW (partition 4) unassignedLeftVar smallestTree 0 [V1],
        labB unassignedLeftVar smallestTotalVars 0 [V2,V3]
    f [V1,V2,V3] = true

    TOY(FD)> Post of (V,C), f V', lab userVar userVal 0 V

**Scenario 1:** The first rule of f performs the search heuristic h1 over V' ≡ {V1,V2,V3}. (1) h1 fragments the domain of V1 into 4 subsets, selecting one randomly. If propagation succeeds, (2) then h1 binds V2 and V3 to their smallest values. If propagation succeeds (with a remaining search space s1), (3) then h1 succeeds, and the exhaustive labeling explores s1. If propagation fails in (1) or (2), or the exhaustive labeling does not find any solution in s1, then h1 completely fails (and so does the first rule of f), as both the labB and fragB primitives explore just one branch.

**Scenario 2:** The second rule of f is tried, performing the heuristic h2 over V'. Here a fragW primitive is applied first.
So, if later either the labB of h2 or the exhaustive lab (acting over s2) fails, backtracking is done over fragW, providing the next best interval of V1 (according to the smallest search tree criterion, as in Fig. 2). If, after trying all the intervals, a solution is not found, then h2 completely fails (and so does the second rule of f).

**Scenario 3:** If both h1 and h2 fail, the third rule of f trivially succeeds, and the exhaustive labeling is performed over the original search space obtained after posting V and C to the solver.

3 Search Primitives Implementation

The integration of the eight new search primitives into TOY(FD) is based on the scheme presented in [CS12], which supported, in the goal computations of TOY(FDg) and TOY(FDi), the coexistence of multiple labeling primitives (using the predefined search strategies provided by Gecode and ILOG) interleaved with the posting of constraints. To achieve the integration, the scheme: (1) Uses the Prolog-C++ connection provided by SICStus to gain access from the Prolog predicate which manages the labeling primitive to the C++ function which implements the search (by accessing the API of Gecode and ILOG). (2) Relies on a vector of auxiliary search managers ss_1 ... ss_l to perform the search of the labelings l_1 ... l_l arising along the goal computation. This includes synchronizing the constraint store of the main solver with ss_i before performing l_i for the first time, and vice versa each time ss_i finds a solution. In this work we reuse this scheme but, instead of relying on the predefined search strategies of Gecode and ILOG, we use their underlying search mechanisms to model new search strategies implementing the intended behavior of the primitives.
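The two-way synchronization of the [CS12] scheme can be pictured with a hedged Python sketch (class and method names are ours, purely illustrative; the real managers wrap Gecode/ILOG stores accessed through the SICStus C++ interface): each labeling l_i gets an auxiliary manager ss_i whose store is synchronized from the main solver before its first search, and synchronized back each time a solution is found.

```python
class SearchManager:
    """Illustrative stand-in for an auxiliary search manager ss_i."""
    def __init__(self):
        self.store = {}       # variable -> value (None = unbound)
        self.synced = False

    def sync_from(self, main_store):
        # main solver -> ss_i, done once before the first labeling.
        self.store = dict(main_store)
        self.synced = True

    def search(self):
        # Placeholder for the real tree exploration: here it merely
        # binds the first still-unbound variable of the store.
        for var, val in self.store.items():
            if val is None:
                return {**self.store, var: 0}
        return dict(self.store)

def run_labeling(main_store, managers, i):
    ss = managers[i]
    if not ss.synced:
        ss.sync_from(main_store)   # main -> ss_i (first time only)
    solution = ss.search()
    main_store.update(solution)    # ss_i -> main (on each solution)
    return main_store
```

The point of the sketch is only the control flow: the main store is copied into the manager lazily, and solutions flow back so that subsequent constraint postings in the goal see the labeled values.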
As the implementation of a new search strategy is different in Gecode and ILOG, we first present an abstract specification of the requirements the new search strategies must fulfill, and then we present separately the adaptation of this specification to the technological framework provided by each library. The current versions of TOY(FDg) and TOY(FDi) are available at: http://gpd.sip.ucm.es/ncasti/TOY(FD).zip

3.1 Abstract Specification of the Search Strategy

We specify a single entry point (C++ function) for the different primitives. Its proposed algorithm is parameterizable by the primitive type and its basic components. It fulfills the following requirements: (1) The algorithm explores the tree by iteratively selecting a variable var and a value v, creating two options: (a) Post var == v. (b) Post var /= v to continue the exploration taking advantage of the previously explored branch, recursively selecting another value to perform again (a) and (b). (2) For frag* strategies it selects an interval i instead of a value, posting in (a) both var #>= i.min and var #<= i.max. However, the (b) branch cannot proceed by posting var #< i.min and var #> i.max, as the constraint store would become inconsistent. Thus, (b) just removes i from the set of intervals and continues the search by selecting a new interval. (3) For labB and fragB strategies only the (a) option is tried. (4) For labO and fragO strategies branch and bound techniques are used to optimize the search. (5) Specific functions are devoted to: (i) the variable and (ii) the value/interval selection strategies, as well as to (iii) the bound associated to a particular solution found by labW and fragW. Those functions include the possibility of accessing Prolog, to follow the criteria the user has specified at TOY(FD) level (using TOY(FD) functions compiled to their counterpart Prolog predicates).
(6) The primitives labW and fragW perform the breadth exploration of the upper levels of the search tree, storing all the satisfiable leaf nodes to later deliver them (one by one) on demand. Thus, ss contains: (i) An entity performing the search, (ii) A vector DS (cf. Section 2.3) containing the solutions. The notion of solution is abstracted as the information necessary to perform the synchronization from ss to the main constraint solver. (iii) A status indicating whether the exploration has finished or not. (7) The algorithm finishes (successfully) as soon as it finds a solution, except for the labW and fragW strategies, where it stores the solution node and triggers an explicit failure, continuing the breadth exploration of the tree. (8) A counter is used to control that only the specified amount of variables of the variable set is labeled/pruned.

The next two sections adapt this specification to Gecode and ILOG Solver, resp. Table 1 summarizes the different notions provided by both libraries.

<table> <thead> <tr> <th>Search Concept</th> <th>Gecode</th> <th>ILOG Solver</th> </tr> </thead> <tbody> <tr> <td>Search trigger</td> <td>Search Engine</td> <td>IloGoal stack</td> </tr> <tr> <td>Tree node</td> <td>Space</td> <td>IloGoal attributes</td> </tr> <tr> <td>Node exploration</td> <td>Brancher Commit</td> <td>IloGoal execution</td> </tr> <tr> <td>Child generation</td> <td>Brancher Choice</td> <td>IloGoal constructor</td> </tr> <tr> <td>Solution check</td> <td>Brancher Status</td> <td>Stack with no pending IloAnd</td> </tr> <tr> <td>Solution abstraction</td> <td>Space</td> <td>Tree path (var,value) vector register</td> </tr> </tbody> </table>

Table 1: Different Search Concept Abstractions in Gecode and ILOG Solver

3.2 Gecode

We have selected Gecode 3.7.3 as the external solver for TOY(FDg) as it is a free software constraint solver with state-of-the-art performance.
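As a library-independent reference for the two adaptations that follow, requirements (1), (3) and (8) of Section 3.1 can be approximated by a hedged Python sketch. It is a naive generate-and-test over explicit domains with names of our own: the `consistent` callback stands in for the solver's propagation, so "post var /= v" is modeled simply as dropping v and trying the next value; in B mode only the first (a) branch is tried, and the counter stops the search once n variables are bound.

```python
def explore(domains, consistent, n, b_mode=False, bound=None):
    """Depth-first exploration: pick a variable, try var == v (a),
    or drop v and retry with the next value (b).
    Stops (successfully) once n variables are bound -- req. (8)."""
    bound = dict(bound or {})
    if len(bound) == n:
        return bound
    # unassignedLeftVar-like choice: leftmost still-unbound variable.
    var = next((x for x in domains if x not in bound), None)
    if var is None:
        return bound
    for v in domains[var]:
        trial = {**bound, var: v}          # (a) post var == v
        if consistent(trial):
            sol = explore(domains, consistent, n, b_mode, trial)
            if sol is not None:
                return sol
        if b_mode:                         # labB/fragB: only (a) -- req. (3)
            return None
        # (b) post var /= v: simply continue with the next value
    return None

# Toy instance: three variables that must take pairwise-distinct values.
domains = {"X": [1, 2, 3], "Y": [1, 2, 3], "Z": [1, 2, 3]}

def all_diff(b):
    return len(set(b.values())) == len(b)
```

With `explore(domains, all_diff, 3)` the full search finds X=1, Y=2, Z=3; with `n=2` the counter stops after two bindings; and in B mode the single branch (smallest value for every variable) clashes with all_diff and the whole strategy fails, just as labB does when its one branch is inconsistent.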
Search strategies in Gecode are specified via Branchers, which are applied to the constraint solver (Space) to define the shape of the search tree to be explored. The Space is then passed to a Search Engine, whose execution method looks for a solution by performing a depth-first exploration of the tree. This exploration is based on cloning Spaces (two Spaces are said to be equivalent if they contain equivalent stores) and hybrid recomputation techniques to optimize the backtracking. As Spaces constitute the nodes of the search tree, a solution found by the Search Engine is a new Space. The library allows the creation of a new class of Brancher by defining three class methods: (1) status, which specifies whether the current node is a solution or its children must be generated to continue the exploration. (2) choice, which generates an object o containing the number of children the node has, as well as all the information necessary to perform their exploration. (3) commit, which receives o and the concrete child identifier to perform its exploration (generating a new Space to be placed at that node).

Adaptation to the Specification. The search strategies are implemented via two layers: (I) A new class of Brancher, MyGenerate, which carries out the tree exploration by the combination of the status, choice and commit methods. As each node of the tree is a Space, the methods are applied to it. (II) A Search Engine, controlling the search by receiving the initial Space and making the necessary clones to traverse the tree. Regarding (1), the choice method deals with the selection of: (i) The variable var and (ii) The value v, creating an object o with them as parameters, as well as the notion of having two children. The variable selection must rely on an external register r, controlled by the Search Engine and thus independent of the concrete node (Space) the choice method is working with.
The register is necessary to ensure that, when a father node generates its right hand child by posting var /= v, this child will reuse r to select var again (as opposed to the left hand child, which removes the content of r in order to select a new variable). Regarding (2), for frag* strategies, instead of passing v to o, the choice method generates a vector with all the different intervals to be tried, and the size of this vector is passed as its number of children. Regarding (3), for labB and fragB only one child is considered. Regarding (4), for labO and fragO we use a specialized branch and bound Search Engine provided by Gecode. Regarding (6), the search entity is the Search Engine and the notion of solution is a Space. Regarding (7), for labW and fragW the Search Engine uses a loop, requesting solutions one by one until no more are found (i.e., the breadth exploration of the search tree has finished). Regarding (8), only the left hand child of lab* strategies increments the counter value, and the status method checks the counter to stop the search at the precise moment.

3.3 ILOG Solver

We have selected IBM ILOG Solver 6.8 as the external solver for TOY(FDi), as it is an industrial market leader for solving generic FD problems. It belongs to the ILOG CP 1.6 package, which contains the ILOG Concert 12.2 modeling library and two other solver libraries for specific scheduling and routing problems. Thanks to the IBM academic initiative these products are free for academic purposes. Search strategies in ILOG Solver are performed via the execution of IloGoals. An IloGoal is a daemon characterized by its constructor and its execution method. The constructor creates the goal, initializing its attributes. The execution method triggers the algorithm to be processed by the constraint solver (IloSolver), and can include further calls to goal constructors, so that the algorithm processed by IloSolver is the consequence of executing several IloGoals.
We say that an IloGoal fails if IloSolver becomes inconsistent by running its execution method; otherwise the goal succeeds. The library allows the creation of a new class of IloGoal by defining its constructor and execution method. Four basic goal classes are provided for developing new goals with complex functionality. The goals IlcGoalTrue and IlcGoalFalse make the current goal succeed and fail, resp. The goals IlcAnd and IlcOr, both taking two subgoals as arguments, make the current goal succeed depending on the behavior of their subgoals. While IlcAnd succeeds only if its two subgoals succeed, IlcOr creates a restorable choice point which executes its first subgoal, restores the solver state at the choice point on demand, and then executes its second subgoal.

Adaptation to the Specification. The search strategies are implemented via the new IloGoal classes MyGenerate and MyInstantiate. Whereas the former deals with the selection of a variable, the latter deals with its binding/pruning to a value/interval. Regarding (1), the control of the tree exploration is carried out by MyGenerate, which selects a variable and uses the recursive call IlcAnd(MyInstantiate, MyGenerate) to bind it and then continue processing a new variable. In MyInstantiate, the alternatives (a) and (b) are implemented with an IlcOr(var == val, IlcAnd(var /= val, MyInstantiate)). Regarding (2), it dynamically generates a vector with the available intervals on each different MyGenerate call. Regarding (3), only the goal var == val is tried. Regarding (4), we explicitly implement the branch and bound. Thus, before selecting each new variable, we check whether the current optimization variable can improve the bound previously obtained; otherwise an IlcGoalFalse is used to trigger backtracking (as well as if, after labeling the required variables, the obtained solution does not bind the optimization variable). Regarding (6), the entity performing the search is an IloSolver.
Also, the notion of solution is given by: (i) A vector of integers, representing the indexes of the labeled/pruned variables. (ii) A vector of pairs, representing the assigned values or bounds of these variables. This explicit solution entity is built along the recursive calls of MyGenerate, which adds on each call the index of the variable being labeled. Once the solution is found, it stores (i) and (ii) in DS. Regarding (7), after storing a solution in labW or fragW an IlcGoalFalse is used, triggering backtracking to continue the breadth exploration. Regarding (8), each call to MyGenerate increments the counter value.

3.4 TOY(FD) Architecture

The resulting TOY(FD) architecture supporting the search primitives is presented in Fig. 4. It contains five different layers: (1) The TOY(FD) interpreter. It allows the user to issue commands. In Fig. 4 the goal proposed in Section 3.1 is to be solved. Besides its FD constraints domain and /=, there is a lab strategy. The user specifies their own variable and value selection criteria by using the functions myValOrder and myVarOrder, resp., which rely on auxiliary, prelude and reflection functions (as, for example, from, get_dom and map, respectively).

Figure 4: TOY(FD) Architecture

(2) The TOY(FD) files defining the language. They include: (i) A file Prelude.toy, to specify the prelude functions (such as map), (ii) A devoted file FD.toy, specifying the set of FD constraints supported, and (iii) A file MyProgram.toy with the user definitions (including from and myVarOrder). (3) The SICStus implementation of TOY(FD). It includes counterpart Prolog files for (i), (ii) and (iii), implementing all the TOY(FD) datatypes, functions and operators supported by the system and defined by the user. The file solver.pl supports the communication from SICStus to C++ by specifying the set of SICStus predicates S being implemented in C++ functions. (4) The C++ interface to the solver.
It includes: (a) The file solver.cpp, containing the set of C++ functions implementing S, and (b) The auxiliary files containing the extra C++ functions, data structures and new solver-specific classes extending the library (needed to implement S). This includes the new C++ class MyGenerate in Gecode and ILOG Solver (the latter also including MyInstantiate). They are devoted to implementing the lab strategy, and contain methods for the variable and value selection. Fig. 4 shows how these methods can access either the solver API (if a predefined criterion is being selected), or (focusing on the variable selection criterion) come back to the SICStus file MyProgram.pl, executing the SICStus predicate myVarOrder (implementing the user's TOY(FD) function myVarOrder). In our example, the latter case holds, and we can see how the execution traverses the SICStus and C++ layers, as a cycle is performed between the SICStus predicate myVarOrder, its auxiliary SICStus predicate get_dom (which belongs to S) and its C++ implementation in solver.cpp. (5) The C++ API of the solver. It is accessed by the C++ methods interfacing the solver. In the case of Gecode, this layer also includes the solver implementation itself, as it is open-source software.

4 Performance Analysis

In this section we present a preliminary although encouraging performance analysis of the TOY(FD) search primitives, devoting a specific case study to each novel concept already presented. In each case we select a constraint satisfaction/optimization problem (either a pure classical CP(FD) benchmark from the CSPLib [CSP12] or a real-life problem) and we describe how the use of a concrete search strategy increases the solving efficiency of the problem.

labB: n-magic_sequence. When n ≥ 9 the sequence follows the pattern L ≡ [(n−4),2,1,0,0,...,1,0,0,0]. Although the use of first-fail (labeling [ff] L) turns the solving of any n-sequence into ≈ 0 ms, the initial search space this search departs from can be huge.
For example, n = 9 has an initial search space of 77,760 candidates. In this context, for each n ≥ 9, applying beforehand labB unassignedRightVar smallestVal 3 L, labB unassignedRightVar largestVal 1 L captures the last four variables' 1,0,0,0 pattern, whose propagation leads to L ≡ [(n−4),A,B,C,0,...,1,0,0,0] (with A in 1..3, B in 0..1 and C in 0..1), dramatically reducing the search space the labeling has to deal with. However, we are interested in examples in which search space reduction (because of the application of our search strategies) also implies a better solving efficiency for the problem.

Employee Timetabling Problem. The real-life problem [CS13] optimizes the timetabling of a department. Relying on the seed approach presented in [R. 07] for solving a former non-optimization version of the problem, we now use the labB strategy to find an optimal seed, i.e., a variable-subset binding (a different one for each of the independent subproblems being solved) which matches the one of the optimal timetabling solution. For example, for the 21-timetabling instance of the [CS13] TOY model, applying labB unassignedLeftVar smallestSearchTree 2 L (extract TCal) before performing the labeling of each subproblem: (1) Finds an optimal seed, (2) Reduces the solving time of the problem by 94% in Gecode.

labW: Langford's number problem. A deep breadth exploration of the search tree with labW supposes a tradeoff between: (1) Obtaining an ordered hierarchy of interesting intermediate tree-level nodes and (2) The computational effort to obtain such a hierarchy. In this context, applying labW smallestMinValueVar smallestSearchTree 3 L to langFord (2,4) directly finds a solution, i.e., not even a further labeling is necessary. However, the time labW spends is bigger than that of running the labeling over the whole search tree, so the use of labW does not pay off.
Fortunately, it does pay off when applying labW largestMinRegretVar smallestSearchTree 2 L to langFord (3,19), where labW does not directly find a solution, but the sum of the time for obtaining the hierarchy and the time for applying the labeling to the remaining space is 65% smaller in Gecode than applying the labeling straight away.

fragB: n-queens. The formulation of the n-queens problem based on global (all different) constraints is much more efficient than the one using single disequality constraints, with some n-queens instances for which the former finds a solution in a few seconds whereas the latter cannot find any after hours. The right hand side of Fig. 1 (cf. Section 2.1) presents an intuitive way of reducing the initial search space of the problem: (1) Split the n variables into k variable sets (vs1, vs2, ..., vsk), (2) Split the initial domain 1..n into k different intervals (1..(n/k), ..., (n/k)*(i−1)+1..(n/k)*i, ..., (n/k)*(k−1)+1..n), (3) Assign the variables of vsi to the ith interval. The application of split into 3 L ([], [0], [1]) == (K1,K2,K3), fragB (partition 3) unassignedLeftVar firstRight 0 K1, fragB (partition 3) unassignedLeftVar firstMiddle 0 K2, fragB (partition 3) unassignedLeftVar firstLeft 0 K3 implements the approach with k = 3 sets, solving the 75-queens instance in just one second in Gecode (whereas it is not solved after twelve hours without using the strategy).

fragW: n-Golomb rulers. The classical formulation of the problem leads to a huge initial search space. The initial domain of the last three rulers in 11-Golomb is H in 36..1020, I in 45..1021 and J in 55..1023 (with known optimal solutions 64, 70 and 72, resp.), whereas that of the first three rulers is 0, A in 1..977 and B in 3..987 (with known optimal solutions 0, 1 and 4, resp.). In this context, an intuitive way of reducing the initial search space is to strongly reduce the upper bounds of these variables.
Applying fragW (partition 3) unassignedRightVar smallestSearchTree 3 L, fragW (partition 15) unassignedLeftVar largestSearchTree 2 L fragments first the last three variables and then the first three. Note that, whereas the former selects as best intermediate node the one minimizing the remaining search space, the latter selects the one maximizing it (which intuitively makes sense, as the smallest interval is the one that prunes the upper bounds of the first three variables the least). The use of these strategies reduces the solving time of the problem by 88% in Gecode.

Results: The obtained results are summarized in Table 2. The first column indicates the problem being solved. The next two blocks of three columns present, respectively, the results of Gecode and ILOG Solver: the elapsed time (measured in milliseconds) without (G, I) and with (G*, I*) the strategy, and the ratio between them. The last two columns compare Gecode and ILOG Solver without (G/I) and with (G*/I*) the strategies.

Table 2: Case Studies of TOY(FD) Search Strategies Application

<table> <thead> <tr> <th>Problem</th> <th>G</th> <th>G*</th> <th>G*/G</th> <th>I</th> <th>I*</th> <th>I*/I</th> <th>G/I</th> <th>G*/I*</th> </tr> </thead> <tbody> <tr> <td>ETP</td> <td>24,710</td> <td>1,465</td> <td>0.06</td> <td>54,351</td> <td>4,570</td> <td>0.08</td> <td>0.45</td> <td>0.32</td> </tr> <tr> <td>Langford</td> <td>624</td> <td>218</td> <td>0.35</td> <td>1,014</td> <td>827</td> <td>0.82</td> <td>0.62</td> <td>0.26</td> </tr> <tr> <td>Golomb</td> <td>42,869</td> <td>5,320</td> <td>0.12</td> <td>75,365</td> <td>9,687</td> <td>0.13</td>
<td>0.57</td> <td>0.55</td> </tr> <tr> <td>Queens</td> <td>-</td> <td>1,622</td> <td>≃ 0.00</td> <td>-</td> <td>4,306</td> <td>≃ 0.00</td> <td>≃ 0.00</td> <td>≃ 0.00</td> </tr> </tbody> </table>

The results show that the use of the search strategies improves the solving performance of the case studies both in Gecode and ILOG Solver (making them from 1.25 to 20 times faster, or even solving instances that could not be solved before). However, the impact of the search strategies is not the same in both systems: Gecode is faster than ILOG Solver both with and without the search strategies, but the ratio is bigger when using the search strategies. As both systems run exactly the same TOY(FD) model, we claim that the approach Gecode offers to extend the library with new search strategies is more efficient than the ILOG Solver one. Benchmarks were run on a machine with an Intel Dual Core 2.4 GHz processor and 4 GB of RAM. The OS used is Windows 7 SP1. The SICStus Prolog version used is 3.12.8. Microsoft Visual Studio 2008 tools were used for compiling and linking the TOY(FDi) and TOY(FDg) C++ code. The different models used as case studies are available at: http://gpd.sip.ucm.es/ncasti/models.zip.

5 Conclusions and Future Work

We have described the integration of new parametric search primitives in the systems TOY(FDg) and TOY(FDi). Our approach benefits both from the high expressivity of TOY(FD) and from the high efficiency of Gecode and ILOG Solver, and can be easily adapted to other CLP or CFLP systems implemented in Prolog and interfacing external CP(FD) solvers with a C++ API.
We have described the primitives, pointing out the novel concepts they include, such as performing an exhaustive breadth exploration of the search tree and then sorting the satisfiable nodes by a specified criterion, fragmenting the variables (pruning each one to a subset of its domain values instead of binding it to a single value), and applying the labeling or fragmentation strategy only to a subset of the variables involved. We have seen how expressive, easy and flexible it is to specify search criteria at TOY(FD) level, as well as how easy it is to combine several search strategies to set up different search scenarios. We have described an abstract view of the eight requirements needed to integrate the search strategies in TOY(FD). We have presented the implementation in Gecode and ILOG Solver, matching each abstract concept to the concrete one provided by each library. We have seen the resulting architecture of the system, pointing out its five layers and the interaction between them. We have presented five case studies (using classical CP(FD) benchmarks and a real-life problem) showing that the use of the search strategies improves the TOY(FDg) and TOY(FDi) solving performance, and that the approach Gecode offers to extend the library with new search strategies is more efficient than the ILOG Solver one. As future work, we will use scripting to apply the search strategies to classical CP(FD) benchmarks under multiple and very precisely controlled scenarios. We will use data mining techniques over the obtained results to find patterns relating the structure of a problem to the concrete search strategies to be applied.

**Bibliography**
IMPROVING THE EFFICIENCY OF TESSERACT OCR ENGINE

Sahil Badla
San Jose State University

DOI: https://doi.org/10.31979/etd.5avd-kf2g
Available at: https://scholarworks.sjsu.edu/etd_projects/420

This Master's Project is brought to you for free and open access by the Master's Theses and Graduate Research at SJSU ScholarWorks. It has been accepted for inclusion in Master's Projects by an authorized administrator of SJSU ScholarWorks. For more information, please contact scholarworks@sjsu.edu.

IMPROVING THE EFFICIENCY OF TESSERACT OCR ENGINE

A Writing Project Presented to The Faculty of the Department of Computer Science, San José State University, In Partial Fulfillment of the Requirements for the Degree Master of Science

By Sahil Badla
May 2014

SAN JOSÉ STATE UNIVERSITY

The Undersigned Project Committee Approves the Project Titled "Improving the Efficiency of Tesseract OCR Engine" by Sahil Badla

APPROVED FOR THE DEPARTMENT OF COMPUTER SCIENCE
Dr. Teng Moh, Department of Computer Science
Dr. Melody Moh, Department of Computer Science
Prof. Ronald Mak, Department of Computer Science

APPROVED FOR THE UNIVERSITY
Associate Dean, Office of Graduate Studies and Research

ABSTRACT

IMPROVING THE EFFICIENCY OF TESSERACT OCR ENGINE

By Sahil Badla

This project investigates the principles of optical character recognition used in the Tesseract OCR engine and techniques to improve its efficiency and runtime. Optical character recognition (OCR) has been used to convert printed text into editable text in various applications on a variety of devices such as scanners, computers and tablets. But mobile devices are now taking over computers in many domains, yet OCR remains a largely unconquered field on them. Programmers therefore need to improve the efficiency of the OCR system to make it run properly on mobile devices.
This paper focuses on improving Tesseract OCR efficiency for the Hindi language so that it runs well on mobile devices, as there are not many applications for this and most of them are either not open source or not for mobile devices. Improving Hindi text extraction will increase Tesseract's performance for mobile phone apps and in turn will draw developers to contribute towards Hindi OCR. This paper presents a preprocessing technique applied to the Tesseract engine that improves character recognition while keeping the runtime low, so that the system runs as smoothly and efficiently on mobile devices (Android) as it does on bigger machines.

ACKNOWLEDGEMENTS

I would like to thank Dr. Teng Moh for his guidance. His insight, advice and guidance throughout the project were invaluable. I also thank Dr. Melody Moh and Prof. Ronald Mak for serving on my defense committee.

# Table of Contents

- Chapter 1 Introduction
- Chapter 2 Tesseract OCR overview
  - 2.1 Introduction to Tesseract OCR
  - 2.2 Type
  - 2.3 Architecture
  - 2.4 Working of Tesseract
- Chapter 3 Previous Work
  - 3.1 Literature Review
  - 3.2 Current technology and limitations
  - 3.3 Various types of architectures
  - 3.4 Types of preprocessing steps available
  - 3.5 Existing Applications on App Store
- Chapter 4 Implementation
  - 4.1 Setup
    - 4.1.1 Running Tesseract
  - 4.2 Architecture
  - 4.3 Tesseract Android tools
  - 4.4 Preprocessing Step
  - 4.5 DPI enhancement
  - 4.6 Combined Picture
- Chapter 5 Result
  - 5.1 Experiment Result
  - 5.2 Conclusion
  - 5.3 Examples
  - 5.4 Future Work and scope
- References
List of Figures

- Fig 1: Tesseract flow
- Fig 2: Image having text
- Fig 3: Output of the OCR in text file
- Fig 4: Hindi fonts
- Fig 5: Box file
- Fig 6: Architectural flow
- Fig 7: Processing vs. Accuracy
- Fig 8: Preprocessing Architecture and steps
- Fig 9: Android application flow
- Fig 10: Colored Image
- Fig 11: Output of Luminosity vs. simple gray scale
- Fig 12: DPI enhancement
- Fig 13: Edge enhancement
- Fig 14: Main Activity
- Fig 15: Choose from gallery
- Fig 16: OCR result
- Fig 17: App data
- Fig 18: Trained Data in SD Card
- Fig 19: ATMA results
- Fig 20: Shirorekha results
- Fig 21: result from paper
- Fig 22: test image
- Fig 23: test image
- Fig 24: test image

List of Tables

- Table 1: result 1
- Table 2: result 2
- Table 3: result 3
- Table 4: conclusion

Chapter 1 - Introduction

Now that we have entered the era of Web 3.0, its main players are mobile devices: almost anything you can imagine is available for mobile. The pool of mobile applications has grown tremendously because of the high availability and portability of mobile phones and their public APIs; it is easier to carry a phone all the time than a computer or laptop. Consider a scenario where you are out with your family in a country whose native language is not English: what would sightseeing look like? A person cannot carry a translation book to look up signs and notice boards; that would ruin the joy of sightseeing. What if we had a device that acts as a ready translator and has a camera, so that you just point it at the text and get the translated text right away? Wouldn't that make many lives easier? The inspiration for this project is a scenario like this one. People in India face a similar issue. India has many regional languages, so travelling to other states and roaming around has always been a problem, and one cannot always hire a guide for the whole trip. The Hindi language is the focus here, as it is the national language of India. With more than a billion people and more than half a billion mobile phones, India still lacks mobile applications focusing on Hindi. This led to research into applications that people could use as ready translators on handheld devices, and that device is none other than the smartphone, which almost everyone carries these days. The problem, then, was to find the missing pieces and work them into a solution.
The next step was to research applications that can extract text from images; these turned out to be "Optical Character Recognition" applications, also known as OCR. This is the starting point for building a system which will then be ported to the Android platform, where Hindi (the national language of India, originating from Sanskrit) text can be extracted. This would be "the" application for visitors travelling to India and, importantly, it would be available on mobile devices as an open source project. The inspiration for this project is the fact that there are not many applications for Hindi OCR, and very few of them are available for mobile phones. What makes this application stand apart from other OCR apps is that it is open source, runs on mobile phones and addresses a very common problem faced by more than a million people in India. This application is a starting point for many similar applications that require OCR dedicated to Hindi. It has numerous use cases, such as translation applications for visitors and tourists, educational applications for students, teachers and kids, regional applications, etc. OCR for personal computers is fairly common and is used in various domains; making this project open source allows this potential field to be further explored and contributed to by developers from all over the world. However, the scope of this project is limited to efficient extraction of the text; translation is out of scope. Let us start with the various components involved in this system. Optical Character Recognition (OCR) is the conversion of scanned or printed text images [1], or handwritten text, into editable text for further processing and analysis; it allows the machine to recognize text automatically. OCR has been used since the mid-1950s in various types of machines and has been improving gradually. Several problems arise while developing an OCR system.
First: the computer has to know what the characters look like, and there is sometimes little visible difference between letters and digits. For instance, it is difficult for a machine to differentiate between the digit "0" and the letter "o". Second: it is also difficult to separate foreground text from the background and other content. Looking back at where it all started, the first OCR system was installed in 1955 at Reader's Digest, where it was used to read sales reports into a computer. After that, OCR became very helpful in computerizing manual or physical tasks relating to documents [Patel 2012]. OCR is now used widely for various purposes, including: license plate recognition systems at toll stations, roads and CCTVs in various countries; text extraction from natural scene images [21]; and extracting text from scanned documents, cards and printers [12], [21]. The proposed system is a faster OCR system with an added preprocessing step, which can increase the accuracy while reducing the time to complete the task. There are many variations of OCR systems, such as the check scanning applications of Bank of America and Chase Bank, which add checks to your account from just a picture of them: the application extracts the details and sends them to the server for the transaction. A similar application for tax payments was developed by TurboTax to easily pay tax via smartphone. Various language translation applications on the application stores (Android, Apple and Windows) extract, translate and then narrate text. There is a whole range of OCR software available in the markets today: desktop OCR, server OCR, web OCR, mobile OCR, etc. The text extraction accuracy of these OCR tools varies from 71% to 95% [Patel 2012]. Many OCR tools are paid and work really well, but only a few of them are open source and free. Tesseract's core is written in C and C++, but it compiles for many platforms and has wrappers for several languages, which makes it effectively platform independent.
Just like any other open source package, it can be forked easily; this is the best part about being open source. The subsequent sections discuss more about Tesseract, its architecture and the groundwork for this project.

Chapter 2 - Tesseract OCR overview

2.1 Introduction to Tesseract OCR

"An Overview of the Tesseract OCR Engine" describes Tesseract as follows: "Tesseract is an open source optical character recognition (OCR) engine [7]. HP originally started it as a project [7]. Later it was modified, improved and taken over by Google, and released as open source in the year 2005. It is now available at [8]" (Smith, 2007). It is very portable compared to others and supports various platforms. Its focus is on reducing rejections and improving accuracy. Currently only a command-line version is available, but there are many projects with a UI built on top of it which could be forked. As of now, Tesseract version 3.02 is released and available for use. Tesseract is now developed and maintained by Google and provides support for around 139 languages [7].

2.2 Type

Tesseract is an example-based system, which makes it efficient and flexible. By example-based we mean that the engine works on a set of example rules defined in the system, and the results depend on this data. In simpler words, to get good results we need to define this set of rules properly, which is called "training the engine". The flexibility of Tesseract comes from the fact that we can always change or modify the rules depending on the requirements.

2.3 Architecture

Tesseract OCR is an elegant engine with various layers. It works step by step, as shown in the block diagram in fig. 1. The first step in the cycle senses the color intensities of the image and converts it into a binary image; this is known as adaptive thresholding [9]. The second step is connected component analysis [7] of the image, which extracts the character outlines.
This step is the main process of the cycle, as it also allows OCR of images with white text on a black background [21]. Tesseract was probably the first engine [7] to process the input image this way. After this, the outlines extracted from the image are gathered into blobs. These are then organized into text lines and regions, and further analysis is done on a fixed area [7]. After extraction, the components are chopped into words delimited by spaces. Recognition of the text then starts as a two-pass process. As shown in fig. 1, in the first pass an attempt is made to recognize each word; each satisfactory word is accepted, and a second pass is started to gather the remaining words. This brings in the role of the adaptive classifier, which then classifies the text more accurately. The adaptive classifier needs to be trained beforehand to work accurately; when the classifier receives some data, it has to resolve the issues and assign the proper place to the text. More details regarding every step are available at [7], [21].

Fig 1: Tesseract flow

2.4 Working of Tesseract

Tesseract works pretty much like a scanner. Its interface is simple: it takes input on the command line with very basic commands. We need to input an image with text in it; an example image is shown in fig. 2 [10]. It is then processed by Tesseract, and the command to do that is shown in fig. 3. The basic Tesseract command takes only two arguments: the first is the input image that contains text, and the second is the output file, which is usually a text file [10]. Tesseract by default gives the output file the extension .txt, so there is no need to specify the output file extension explicitly [21].

![Image having text](image.png)

Fig 2: Image having text [10]

Tesseract supports various languages. Each language comes with a trained language data file, and the language file must be kept in a location Tesseract knows.
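The two-argument command described in this section, together with the `-l` flag for selecting trained language data, can be sketched as a small helper. This is a minimal illustration, not part of the project's code: the helper name `build_tesseract_cmd` and the file names are invented, but the argument order (`tesseract <image> <outputbase> -l <lang>`) and the `hin+eng` multi-language syntax follow the Tesseract command line.

```python
def build_tesseract_cmd(image_path, output_base, languages=("eng",)):
    """Build the argument list for a basic Tesseract invocation.

    Tesseract takes the input image and an output base name; it appends
    the .txt extension to the output itself. Multiple languages (e.g.
    Hindi plus English) are joined with '+', as in 'hin+eng'.
    """
    return ["tesseract", image_path, output_base, "-l", "+".join(languages)]

# For an image containing mixed Hindi and English text:
cmd = build_tesseract_cmd("sign.png", "out", ("hin", "eng"))
# cmd can then be passed to subprocess.run(cmd) on a machine where
# Tesseract and the hin/eng trained data files are installed.
```

On such a machine this produces `out.txt` next to the image, matching the two-argument behavior described above.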
When using it in a project, it is advised to keep it within the project folder, which serves as the Tesseract home folder on your machine. In this research we aim to extract English and Hindi characters from images, so we have to keep both the Hindi and English data files. After the processing step is completed, the output file is generated as shown in fig. 3. For simple images, with or without color (gray scale), experiments show that Tesseract is capable of achieving high accuracy, such as 95%; but in the case of complex images with multi-layered backgrounds or fancy text, Tesseract gives better results if the picture is in grayscale mode instead of color. To prove this hypothesis, we ran Tesseract on the same images in color and in grayscale mode, and the two cases gave different results [21].

Fig 3: Output of the OCR in text file [10]

Chapter 3 - Previous Work

3.1 Literature review

There were three main considerations in conducting a literature review for the topic "Improving the efficiency of Tesseract OCR engine". The first is current technology and its limitations: there are a number of applications similar to the experiment being conducted. The second is the type of architecture used in current applications. The third is the various types of preprocessing steps that could be added without affecting the throughput of the system. The literature review attempts to address these three issues and to show the feasibility of this approach, both theoretically and experimentally. The main focus here is to add a preprocessing step which increases accuracy, and then to build this system for a smartphone platform, which in our case is Android. Various publications were reviewed, and they ultimately gave a starting point for addressing all these issues to the extent that further research, and actual experimentation, is warranted.
3.2 Current technology and limitations

OCR is widely used in web, desktop and graphic applications. Most OCR-based applications, such as World Lens and CardScan, are privately owned; there are far fewer applications specifically for mobile, and only a few open source frameworks for mobile OCR.

Examples of private libraries

**RICOH Library:** This library was introduced in the late 90's. One of the best libraries of its time, it had many applications, such as Ricoh's own printer and scanner products, book scanner algorithms and online picture-to-document converters. Its architecture was rule-based machine translation, with various rules and patterns defined at the character level. A shortcoming of this system is that it only works with perfectly lighted media, and processing also takes a lot of time.

**Kusachi et al. (2004):** This system was developed in 2004 and incorporated a hybrid translation approach, meaning a combination of a statistical and an example-based system. Its shortcomings are that it supports fewer languages and is not flexible enough to add more. It also works only on block letters, with no support for cursive letters at all, and, like the previous library, it has slower processing.

**Open Source Libraries**

**Tesseract (Google):** Tesseract was ranked among the top three engines in the 1995 OCR accuracy test conducted by the University of Nevada, Las Vegas. The best thing about Tesseract is that it started back in 1995 and has been improving ever since, making a transition when it was taken over by Google. It is probably one of the most accurate open source OCR engines available and is still growing. Tesseract works on Linux, Windows and Mac OS X, and the source can also be compiled for other platforms, including Android and iPhone. It supports around 149 languages, which come as different packages.
Tesseract is an example-based text detection system, so for whatever language we want to work with, we need to either download that language package or train the engine with our own training data. The benefits of this engine are that it is flexible in supporting different languages and can be compiled to run on different platforms.

**GOCR:** This is also among the top three best-rated open source OCR engines. GOCR's best feature is that, unlike Tesseract, it has a graphical user interface and can be used with different frontends. This makes it very easy to work with different languages and architectures. It supports many different image file formats, and its quality has improved over time.

**JAVAocr:** This is yet another OCR engine targeting mobile devices. Most of its features are the same as Tesseract's and GOCR's, but the one feature that sets it apart is its very small memory footprint. Very few external dependencies make it suitable for mobile development, it has a modular structure for easier development and deployment, and it is built for cross-platform applications. There are some disadvantages, though. First, its processing is not fast: some portions work really well, but others are lousy. Second, it is not very well documented or supported.

The paper entitled "Optical Character Recognition by Open Source OCR Tool Tesseract: A Case Study" [21] provides a good reference for the capabilities and applications of Tesseract OCR. The authors used gray scaling as a preprocessor to the Tesseract OCR engine. The most interesting things about this paper are its subject matter and its test data: the authors present an overview of the subject and then the experiment they carried out, OCRing car license plates with an added preprocessing step. The article provides relevant, useful and up-to-date information regarding Tesseract OCR and its use.
It is a very good source, giving this project a comparative background and an introduction to Tesseract, including a comparison of the results of the new system the authors built against the normal Tesseract OCR. Applications based on these open source frameworks are very few, especially for Hindi. The literature below gives an idea of the various research works on Hindi and English OCR applications and their implementations. Some other papers, [28], [29] and [30], provide good insight into Hindi OCR and serve as initial steps in the system's architecture. These research works on Hindi OCR apply to both desktop and mobile applications. Based on these studies we lay out the framework for our application; they also serve as a benchmark for the results and conclusions on the effectiveness of this experiment. The setup of Hindi OCR within Tesseract is another important task: the way you train the engine and input the dataset matters a lot. Training Tesseract for Hindi mainly includes four steps, as follows:

**Generating training images:** The official Tesseract website explains this step well: "The first step is to determine the full character set to be used, and prepare a text or word processor file containing a set of examples" [31]. Two important points to remember for a training file are: first, make sure the file contains all the characters we are expecting; second, there should be at least 5 samples of each, with more samples of the more frequent characters, at least 20 [31]. Keeping all this in mind, let us explore how to create box files.

**Make box files:** A box file is a sample used to match against the characters; we need a 'box' file for each training image. The official Tesseract website defines it as follows: "The box file is a text file that lists the characters in the training image, in order, one per line, with the coordinates of the bounding box around the image" [31].
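As a minimal sketch of what this format looks like, the following parses box-file lines of the shape `<character> <left> <bottom> <right> <top> <page>`; the sample characters and coordinates below are invented for illustration, not taken from the project's training data.

```python
def parse_box_file(text):
    """Parse Tesseract box-file text: one character per line, followed by
    its bounding box coordinates (left bottom right top) and the page
    number of the training image it came from."""
    entries = []
    for line in text.strip().splitlines():
        parts = line.split()
        left, bottom, right, top, page = (int(v) for v in parts[1:6])
        entries.append({"char": parts[0],
                        "box": (left, bottom, right, top),
                        "page": page})
    return entries

# Two hypothetical lines, as they might appear for one training image:
sample = "A 15 30 45 70 0\nB 50 30 82 70 0"
boxes = parse_box_file(sample)
# boxes[0] is {'char': 'A', 'box': (15, 30, 45, 70), 'page': 0}
```

Each entry ties one character to its bounding box, which is exactly the correspondence Tesseract checks during training; a mismatch here is what causes the wrong interpretation of sample data discussed next.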
Tesseract is not very forgiving about the sample data; inconsistencies may lead to wrong interpretation of the data.

**Run Tesseract:** In this step we run the Tesseract engine with the new training files. The purpose is to create the trained dataset which works as a rule engine; it also creates log files.

**Compute character set:** The guidelines from the Tesseract website state: "Tesseract needs to know the set of possible characters it can output. To generate the unichar set data file, use the unicharset_extractor program on the box files generated above" [31]. One more requirement is that Tesseract needs access to character properties, i.e. isalpha, isdigit, isupper, islower, ispunctuation [31].

The systems described in [30] and [31] classify Hindi OCR, its setup and training very nicely. Their only limitation is that they target the desktop; we have to build a similar system for mobile devices.

3.3 Various types of architectures

The paper "An Automatic Sign Recognition and Translation System" [22] provides an in-depth look at the various types of architectures we could use. It presents the working of Tesseract on the Android system; the attributes analyzed are the accuracy and efficiency of Tesseract OCR on a smartphone. The scope of this research is also smartphones, so this is a good starting point for the experiment. The objective is to determine whether Tesseract would be feasible on smaller machines.

**ATMA [30]:** This application uses the original OCR engine without any enhancement. The system is built on an Android device, with a simple architecture implemented on the Android operating system. The OCR engine used is Tesseract 3.01, the latest stable version. The device is not mentioned, but it is certainly a smartphone running Android. Other tools used are Android NDK r7 and Android SDK r16, for compiling and building the Android project. The capture mode used is "still camera".
**Complete OCR for Hindi Text [28]:** This application also uses the Tesseract OCR engine, though the version is not specified. The application is mainly for printed Hindi text and classifies Hindi characters very nicely. The system is built for desktop computers and the capture mode is a still camera. It uses segmentation of characters as a technique and post-processing for error detection.

**Shirorekha chopping [29]:** This application uses the Tesseract 3.01 OCR engine, and the target operating system mentioned is Ubuntu. The main feature described in this paper is chopping the image character by character. The system trained the Hindi data in one of the newer fonts and performs a comparative study against Google's stock Hindi training data. Various preprocessing steps were applied.

**TranslatAR [32]:** The device used in this system is a Nokia N900. This paper demonstrates one of the newest architectures, i.e. a video augmentation capture mode, meaning that the system deals with video frames in real time. Other features include a foreground-background color extraction technique.

Looking at all these applications and their architectures, one realizes that with many preprocessing steps, a lot of time is spent before the extraction even begins, so preprocessing needs to be kept minimal to save time. On the architecture side, Tesseract 3.01 is the most stable version, so it will be the target version of the OCR engine. To keep things simple, the system will not implement OCR in video mode. The aim of this project is to extract the characters efficiently in the least amount of time.

### 3.4 Types of preprocessing steps available

The article "Transcription for the OpenPlaques project" [23] surveys various types of preprocessing methods for images and gives good insight into computer graphics rendering and filtering. In this article, individual methods are analyzed and their results identified and studied.
The methodologies presented will be used to determine the feasibility of using such a method as a preprocessor in the solution. The article also presents advanced features such as text filtering using a sliding window on the target image [23].

**Steps available in Tesseract:**

**Binarization:** Binarization means converting an image of up to 256 gray levels into a black and white image. It is mostly used as a preprocessing step in image processing tools. It works by choosing a threshold value, classifying all pixels as above or below that value, and normalizing the image to those two levels. How to select the correct threshold is the main question; because of this uncertainty, adaptive image binarization is commonly used, as it adapts the threshold value to the image's color levels.

**Grayscale:** Grayscaling transforms a color image into shades of gray. Colored images often reveal a lot of information when converted this way. Most computers can represent up to 256 levels of gray, and grayscaling converts a continuous-tone image into these levels. The process is used in many real-time image processing tools, which is why most CCTV and traffic light cameras are grayscale; it makes separating the various parts of an image easier. We need to take care of the DPI along with the image's resolution: different capturing devices capture images at different resolutions, which introduces a lot of inconsistency. This problem can be tackled effectively using data compression techniques, but the grayscale technique is still bound to use a lot of memory [24].

**The need for further preprocessing**

The current implementation of Tesseract works fine for desktop or laptop computers, but for mobile devices we need something lighter and more efficient. The preprocessing steps prepare the input image to be almost ready for extraction by the Tesseract engine.
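The grayscale and binarization steps just described can be sketched at the level of a single pixel. This is an illustrative sketch, not the project's implementation: the simple channel average is shown alongside a perception-weighted variant (the 0.21/0.72/0.07 weights are one commonly used choice, assumed here), and binarization uses a fixed threshold rather than the adaptive one discussed above.

```python
def to_gray(pixel, weighted=False):
    """Convert an (R, G, B) pixel to one of 256 gray levels.

    The simple method averages the channels; the weighted method
    emphasizes green and de-emphasizes blue to mimic human perception
    (the 0.21/0.72/0.07 weights are an assumed, common choice).
    """
    r, g, b = pixel
    if weighted:
        return round(0.21 * r + 0.72 * g + 0.07 * b)
    return round((r + g + b) / 3)

def binarize(gray_levels, threshold=128):
    """Fixed-threshold binarization: white (255) above the threshold,
    black (0) at or below it. Adaptive methods instead derive the
    threshold from the image's own gray levels."""
    return [255 if level > threshold else 0 for level in gray_levels]

# A pure green pixel shows why the two grayscale methods can disagree:
avg = to_gray((0, 255, 0))            # 85
lum = to_gray((0, 255, 0), True)      # 184
print(binarize([avg, lum]))           # prints [0, 255]
```

In a full pipeline these functions would be applied to every pixel of the image before handing it to Tesseract; the pure-green example shows that the choice of grayscale method can flip a pixel across the binarization threshold.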
The preprocessing also aims to remove noise, light variations and other artifacts which impair the task of recognition. Let us look at other algorithms available to achieve this.

**More steps available are:**

**Luminosity:** The luminosity method uses grayscale as its base: the image is converted to grayscale while preserving certain light intensities. Luminosity is almost like plain grayscaling, but more sophisticated, in that it takes the human perception of color into account [24]. The human eye functions a bit differently from computer graphics: it is most sensitive to green and least sensitive to blue intensities, so the process weights the channels to preserve these differences.

**OpenCV:** OpenCV (Open Source Computer Vision) is a library of programming functions and algorithms [36] whose main focus is to provide an API aimed at real-time computer vision. It was originally developed by Intel, is free to use for development (under the open source BSD license) [36], and, best of all, is cross-platform.

**Linearization:** Linearization is basically correcting an image for blur and fuzzy edges. Sometimes, due to motion during capture, the edges bleed into each other and give a fuzzy image; the linearization technique handles this very nicely, edge by edge.

**Pixelation:** Pixelation is used for individual character segmentation. It is defined as displaying the individual pixels of a digitized image: a block is displayed for each pixel, at a distance from its neighbors that is apparent to the user. This can also happen unintentionally, when a low-resolution image gets stretched to a larger size.

3.5 Existing Applications on App Store

**World Lens:** World Lens is one of the leading applications available for both Android and iOS smartphones. It is one of a kind and uses augmented reality to translate the captured content.
Developed by Quest Visual, Word Lens uses the built-in camera of the phone to quickly scan and identify text in foreign languages. Processing is fast and works in video mode, so no images are actually saved to the phone's memory. After translation, the words are displayed in the context of the original text. Word Lens is available in both trial and paid versions for Apple's iOS as well as for a selection of Android smartphones, and it is among the best-rated applications on the market. It is a closed-source application which supports more than 40 languages; it both extracts and translates the characters. Some cons of this application with respect to the proposed system are that it needs an internet connection to work, and that it is heavy in terms of processing because it operates in video mode: it works best with high-end devices and also needs a lot of memory. **Mobile OCR:** This is a mobile application which makes it possible to obtain text from pictures taken with the camera hardware and work with it. It has a simple and direct multilingual interface, which lets us access its features in a fast and effective way. The customizable interface allows us to adapt the characteristics of the camera for an optimal image preprocessing that improves the results obtained by the OCR. Also, the advanced text post-processing techniques developed by Smart Mobile Software increase the effectiveness to the point of achieving almost perfect results. It is not an open-source application. It is compatible with iOS and Android devices, supports many languages, and works on digital as well as printed text. The application is lighter than its competitors. Cons of this application with respect to the proposed system are that it needs to download app data for each language, and the processing of the overall system is slow.
**Image to Text OCR:** The ImageToText app is a free app which allows us to extract text from images and share the results over the web. The app is fairly simple to use: take a picture of a document with the camera and e-mail the image to yourself or share it. You then receive the image along with a text file containing the editable text extracted from the image. It currently supports English documents only. It is developed by Ricoh Innovations; it is a closed-source application and is compatible with iOS and Android devices. Cons of this application with respect to the proposed system are that it has no support for languages other than English, and its accuracy is much lower than that of the other applications.

Having gathered enough ideas and direction from the research work presented above, we will start implementing the proposed system using the Tesseract OCR engine on Android devices. The subsequent sections cover the implementation and working of the system.

Chapter 4 - Implementation

4.1 Setup

To conduct this experiment, a few things must be configured on the system. The device used for this experiment is primarily a Samsung Galaxy S3 running Android 4.1. To install Tesseract we have to follow certain steps on the machine being used for development [8]. Installation starts by downloading the source (or the installer, depending on the operating system) from the Tesseract website [8]. After that, download the Android SDK r19 and Android NDK r7c (open source) to open and build the code. The Android NDK is needed because of the native code in Tesseract: the Leptonica image processing library, which is written in C. Then we build the project using the following commands in the IDE:

```
cd <project-directory>/tess-two
ndk-build
android update project --path .
ant release
```

The initial setup is the most crucial part: if it is not done properly, one may keep getting strange errors during execution. Once this is done, the system can be installed on an Android device.

4.1.1 Running Tesseract

Running Tesseract on Android is one of the biggest challenges in this project. After a successful installation we can verify that the Tesseract OCR engine is working by running it directly [25]. Tesseract has no graphical user interface and works from the command line, so we open a terminal or command prompt window on the computer. The general form of the command is [8] [25]:

```
tesseract image_name output [-l lang] [-psm page_seg_mode] [config_file...]
```

In simpler words, a basic command to run OCR on an image named 'input_image1.png' and save the extracted result to 'output_image1.txt' would be [25]:

```
tesseract input_image1.png output_image1
```

To do the same thing in another language, say Hindi [25]:

```
tesseract myinput.png out -l hin
```

**Training Tesseract for Hindi** The steps to train Tesseract for a particular language were presented in Section 3.2. We follow the same steps to train Tesseract for the Hindi character set in 4 fonts; this is very important for accurate character extraction. Examples of the font files are as follows:

<table> <thead> <tr> <th>Guḍākeśa 99</th> <th>ऋत्साहित्य नली नाम वीरसेनसुतो बली। उपयत्नो गुणैरिष्टयो नूत्वानम्बाबाबोरिविद्यें।</th> </tr> </thead> <tbody> <tr> <td>Śāntipur 99</td> <td>ऋत्साहित्य नली नाम वीरसेनसुतो बली। उपयत्नो गुणैरिष्टयो नूत्वानम्बाबाबोरिविद्यें।</td> </tr> </tbody> </table>

Fig 4: Hindi fonts

Example of the box file is as follows:

4.2 Architecture

In the previous sections we did an extensive survey of existing application architectures. One thing common to all these applications is that preprocessing is required to get better accuracy.
Also, most of the applications use still-camera capture mode for image input. Let's start laying out the overall architectural diagram of the proposed system. By now the various pieces of the system (Tesseract engine, preprocessing, Android migration) are clear, and we will put them together in order. Overall Flow: The above image shows the overall architecture of the system. It has two main subsystems: first the preprocessing, and second the Tesseract API. The project's main focus in this report is the preprocessing step, which works on the input image to make it ready for the Tesseract engine. One thing to note is that there is a tradeoff between processing time and accuracy: the more time spent on preprocessing, the higher the accuracy, but the longer the runtime. The figure below illustrates this relationship. **Processing vs. Accuracy** The image shows the distribution of weight across the different OCR steps with respect to processing time and accuracy. On the processing side, most of the time is spent in the extraction step, because the better the characters are extracted, the easier the recognition step becomes. Recognition and translation are then almost equally weighted. On the accuracy side, most of the accuracy is determined in the recognition phase; it is up to the rule-matching subsystem to recognize and match characters efficiently. The above image explains the preprocessing subsystem of the proposed application. The steps in the preprocessing phase are: rotation, resolution optimization, DPI adjustment, and finally the luminosity grayscaling algorithm applied to the image. The processed image is then fed to Tesseract as input. The rotation step rotates the image if the camera was not held at a zero-degree angle while capturing; it is a relatively small step. Resolution optimization is also a comparatively small step, which compresses larger images to the best resolution for Tesseract.
Most of the time is consumed in DPI adjustment and the grayscaling algorithm; these are the main steps that make the image ready for Tesseract.

Android App architecture: The image above shows the architecture from the Android point of view. The application flow is as follows: the first step is to choose the input image, which can either be captured with the camera or selected from the image gallery. After selection, the image is fed to the preprocessing subsystem. The processed image is then fed to the Tesseract API, which extracts the characters from the image, and the result is displayed on the screen.

4.3 Tesseract Android Tools

Tesseract Android Tools is a project which helps in compiling the Tesseract and Leptonica libraries so that they can be used as an API on the Android platform. It is structured as a service which exposes API calls to native code. The API is written in Java and works well with Android projects. This step is essential for porting the system built so far onto Android: the Tesseract library behaves as an API to the project's front end. We need to compile the project with the Android tools to make it available as an Android project.

4.4 Preprocessing Step: Luminosity Technique

Luminosity threshold grayscaling is a method for converting an image into grayscale while preserving the relative color intensities. The luminosity technique is almost like the average-color method, but more sophisticated, because it takes human perception of color into account [24]. The human eye is more sensitive to certain colors: it is most sensitive to green and least sensitive to blue. Below is the equation for computing the gray value of a pixel from its red, green, and blue components [24]:

\[ \text{Luminosity} = 0.2126 \times R + 0.7152 \times G + 0.0722 \times B \]

Each pixel is converted using the above formula. Below is an illustration of images converted using both methods of grayscaling.
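To make the difference concrete, here is a small illustrative comparison (not code from the report; the class and method names are ours) of the plain average method and the luminosity weighting on a single pixel:

```java
// Compare plain channel averaging with the luminosity weighting
// on one RGB pixel.
public class GrayCompare {
    // Simple average of the three channels.
    public static int average(int r, int g, int b) {
        return (r + g + b) / 3;
    }
    // Luminosity weighting: green dominates, blue contributes least.
    public static int luminosity(int r, int g, int b) {
        return (int) (0.2126 * r + 0.7152 * g + 0.0722 * b);
    }
}
```

For a pure green pixel (0, 255, 0), the average method gives a gray value of 85, while the luminosity method gives 182, reflecting the eye's much higher sensitivity to green.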
Result images: The algorithm below shows how we process an image with the luminosity technique:

```java
// Convert the image to grayscale using the luminosity method:
// traverse every pixel, read its RGB components, and replace it
// with a gray value weighted by human color perception.
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        Color c = new Color(image.getRGB(x, y)); // unpack the pixel's RGB value
        int luminosity = (int) (0.2126 * c.getRed()
                              + 0.7152 * c.getGreen()
                              + 0.0722 * c.getBlue());
        // write the gray value back on all three channels
        image.setRGB(x, y, new Color(luminosity, luminosity, luminosity).getRGB());
    }
}
```

In this algorithm we traverse the image pixel by pixel and apply the luminosity formula shown below to each pixel:

\[ \text{luminosity} = 0.2126 \times \text{red} + 0.7152 \times \text{green} + 0.0722 \times \text{blue} \]

This is the main step, where we transform each pixel to bring out the real details. Finally we pack all the pixels together to form the image again.

### 4.5 DPI Enhancement

To get the best results out of an image we need to fix its DPI too. Grayscaling alone only works when there is no distortion or lighting effect in the image [26], and we will not get such ideal images every time. One step we need to consider in DPI enhancement is fixing the DPI (if needed): 300 DPI is the minimum acceptable for Tesseract, and a better DPI results in better extraction. The reason we need this step is that the application will be used on different smartphones, each with its own camera specification and pixel density, so it is better to normalize the picture before saving it to the gallery and present consistent input to the Tesseract engine. Since in our case the images are not perfectly captured and ideal, and we also want to keep the process light and suitable for mobile phones, we keep the scope limited to DPI enhancement only. For our images we fix the DPI to 300 (as needed by Tesseract) [26].
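The normalization just described can be sketched as a small helper (illustrative; the class and method names are ours): given the physical size of the captured area in inches, compute the pixel count needed for 300 DPI and the scale factor to apply to the current image.

```java
// Illustrative DPI-normalization helper: compute the pixel dimensions
// required to hit the 300 DPI minimum that Tesseract expects, given
// the physical size of the scanned area in inches.
public class DpiNormalize {
    static final int TARGET_DPI = 300;

    // Pixels needed along one axis to reach the target DPI.
    public static int targetPixels(double inches) {
        return (int) Math.round(inches * TARGET_DPI);
    }

    // Scale factor to apply to an existing image dimension.
    public static double scaleFactor(int currentPixels, double inches) {
        return (double) targetPixels(inches) / currentPixels;
    }
}
```

For an 8.5-inch-wide page, 300 DPI requires 2550 pixels; an image only 1275 pixels wide would need to be scaled by a factor of 2, while a much larger image would be scaled down.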
If an image is larger than this, we compress it to the size at which we achieve 300 dots per inch. The figure below describes the effect of DPI enhancement. **Fig 12**: DPI enhancement. The figure below shows how the algorithm works to refine the edges. **Fig 13**: edge enhancement. Algorithm for DPI enhancement:

```c
/* Refine edges after DPI normalization: compute horizontal and
   vertical gradients over the rescaled image and rebuild the
   output from the gradient magnitudes. */
int imgx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };  /* x-gradient kernel */
int imgy[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };  /* y-gradient kernel */

int *diffx = calloc(r_Height * r_Width, sizeof(int));  /* diff in dpi on x edge */
int *diffy = calloc(r_Height * r_Width, sizeof(int));  /* diff in dpi on y edge */
int *mag   = calloc(r_Height * r_Width, sizeof(int));  /* gradient magnitude */

/* compute the gradients and magnitude of the input image */
for (int y = 1; y < r_Height - 1; y++) {
    for (int x = 1; x < r_Width - 1; x++) {
        int result_xside = 0, result_yside = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                result_xside += pixel(x + dx, y + dy) * imgx[dy + 1][dx + 1];
                result_yside += pixel(x + dx, y + dy) * imgy[dy + 1][dx + 1];
            }
        diffx[y * r_Width + x] = result_xside;
        diffy[y * r_Width + x] = result_yside;
        mag[y * r_Width + x]   = abs(result_xside) + abs(result_yside);
    }
}

/* return the recreated image */
result = new_image(mag, r_Height, r_Width);
return result;
```

In a nutshell, this algorithm normalizes the pixel density: each pixel has its size, and the depth of the image is denoted by the number of pixels packed into it. As already discussed, the system can encounter images with less than the minimum required density, or very large pictures.

4.6 Combined Picture

Following are screenshots from the working Android application. The device running the project is a Samsung Galaxy S3 with Android 4.1 Jelly Bean. Figure 14 shows the main activity, the landing page of the application that opens when we install and run the app. It offers two options for choosing an image for OCR: the first is the camera button, which opens the camera and lets us take a picture; the second is choosing an existing image from the gallery, in which case the image can reside anywhere in the app memory or on the SD card.
![Fig 14: Main Activity](image1.png) ![Fig 15: Choose from gallery](image2.png) Figure 15 shows the popup that opens when we press the "choose from gallery" button and takes us to the phone's image gallery. Figure 16 shows the output text that appears in the text box after OCR. Figures 17 and 18 show the application data inside the app: this folder is created once the application is installed and stores the trained data files. Tesseract uses this location to process input images and to compare the extracted characters against the trained data. The images above present the fully functional Android application that was proposed at the beginning. In the subsequent section, the results of the experiment are compared against other research works.

Chapter 5 - Results

5.1 Experiment Results

Following are the results of the experiment, conducted on the data sets of previous research works. Results were gathered separately for Hindi and English data. Comparison with Hindi OCR apps: **ATMA [30]:** The Android Travel Mate Application runs Tesseract at its core, just like our system, and it also extracts Hindi text, so it is the first research work to compare our system with. The data provided was a set of random images, logos and text in Hindi, and the system was run several times on this data set.
Following is the table showing the results of the experiment with this data set:

<table> <thead> <tr> <th>Image no</th> <th>Image Type</th> <th>Accuracy (%)</th> <th>Runtime (ms)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3 words</td> <td>70</td> <td>1745</td> </tr> <tr> <td>2</td> <td>4 words</td> <td>70</td> <td>2304</td> </tr> <tr> <td>3</td> <td>2 words</td> <td>90</td> <td>1845</td> </tr> <tr> <td>4</td> <td>2 words</td> <td>100</td> <td>2372</td> </tr> <tr> <td>5</td> <td>2 words</td> <td>95</td> <td>1599</td> </tr> <tr> <td>6</td> <td>10 words</td> <td>50</td> <td>2478</td> </tr> <tr> <td>7</td> <td>2 words</td> <td>50</td> <td>1597</td> </tr> <tr> <td>8</td> <td>3 words</td> <td>30</td> <td>2000</td> </tr> <tr> <td>9</td> <td>8 words</td> <td>70</td> <td>3568</td> </tr> <tr> <td>10</td> <td>12 words</td> <td>75</td> <td>3448</td> </tr> <tr> <td>11</td> <td>3 words</td> <td>90</td> <td>1709</td> </tr> <tr> <td>12</td> <td>15 words</td> <td>90</td> <td>5629</td> </tr> <tr> <td>13</td> <td>13 words</td> <td>70</td> <td>6179</td> </tr> <tr> <td>14</td> <td>9 words</td> <td>85</td> <td>4723</td> </tr> <tr> <td>15</td> <td>8 words</td> <td>80</td> <td>2966</td> </tr> <tr> <td>16</td> <td>3 words</td> <td>90</td> <td>1172</td> </tr> <tr> <td>17</td> <td>2 words</td> <td>90</td> <td>1347</td> </tr> <tr> <td><strong>Average</strong></td> <td><strong>101 words</strong></td> <td><strong>76.17</strong></td> <td></td> </tr> </tbody> </table>

Table 1: result 1

Our experiment results (Table 1) show that the average time taken per word is 0.459 sec. The original paper's results are as follows; let us now compare the two.

<table> <thead> <tr> <th>Language</th> <th>English</th> <th>Hindi</th> </tr> </thead> <tbody> <tr> <td>Avg. Mean Confidence per word</td> <td>69</td> <td>24</td> </tr> <tr> <td>Avg.
Time taken per word</td> <td>153 ms</td> <td>681 ms</td> </tr> <tr> <td>Light and Noise susceptibility</td> <td>LOW</td> <td>HIGH</td> </tr> <tr> <td>Average Accuracy</td> <td>97.9%</td> <td>79.2%</td> </tr> </tbody> </table>

Fig 19: ATMA [30] results

The experiment shows a clear decrease in runtime compared to the original paper, i.e., in the time taken to extract the words: the average time taken per word decreased from 681 ms to 459 ms. For a few images, such as images 6, 7 and 8 (highlighted in gray), the system showed a very low accuracy percentage. The reasons for the low accuracy were found to be special symbols overlapping with character boundaries and light distortions. The new system keeps the accuracy almost the same as in the original paper.

**Shirorekha Chopping Integrated Tesseract OCR [29]:** The experiment was also conducted on the data set of the above research paper [29], which consists of images with a large number of Hindi characters. The results of the experiment are as follows:

<table> <thead> <tr> <th>Image no</th> <th>Characters</th> <th>Accuracy (%)</th> <th>Runtime</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>21</td> <td>90</td> <td>1079 ms</td> </tr> <tr> <td>2</td> <td>29</td> <td>91</td> <td>1100 ms</td> </tr> <tr> <td>3</td> <td>20</td> <td>92</td> <td>1054 ms</td> </tr> <tr> <td>4</td> <td>24</td> <td>89</td> <td>1066 ms</td> </tr> </tbody> </table>

Table 2: result 2

The results of the experiment show that the average time to process each image is about 1074 ms, and the average accuracy was measured at about 90.5%.
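As a quick sanity check (ours, not part of the original experiment), the Table 2 averages can be recomputed directly from its four rows:

```java
// Recompute the Table 2 averages from the per-image figures.
public class Table2Averages {
    public static double mean(int[] v) {
        int sum = 0;
        for (int x : v) sum += x;
        return (double) sum / v.length;
    }
    public static void main(String[] args) {
        int[] accuracy = {90, 91, 92, 89};          // per-image accuracy (%)
        int[] runtime  = {1079, 1100, 1054, 1066};  // per-image runtime (ms)
        System.out.println(mean(accuracy)); // 90.5
        System.out.println(mean(runtime));  // 1074.75, reported as ~1074 ms
    }
}
```

The mean accuracy is exactly 90.5% and the mean runtime is 1074.75 ms, matching the figures quoted above.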
**Original paper results are as follows:**

<table> <thead> <tr> <th></th> <th>Processing Time</th> <th>Total Characters in Test Image</th> </tr> </thead> <tbody> <tr> <td>Google’s hin.traineddata</td> <td>2000 ms</td> <td>94</td> </tr> <tr> <td>Parichit’s hin.traineddata</td> <td>1500 ms</td> <td>94</td> </tr> <tr> <td>Proposed hin.traineddata</td> <td>1000 ms</td> <td>94</td> </tr> </tbody> </table>

Fig 20: Shirorekha results [29]

The results of the two experiments are nearly the same. However, the results in the figure were obtained on a desktop machine, whereas the new system runs the experiment on a mobile phone, keeping the runtime and accuracy the same on a lower-powered machine.

**Comparison with English OCR** Let us now compare the results for English character recognition. The new system was also run against data sets from recent research works on English OCR. The experiment was conducted on the data set of research paper [21].

<table> <thead> <tr> <th>Image no</th> <th>Words</th> <th>Accuracy (%)</th> <th>Runtime (ms)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>5</td> <td>100</td> <td>2202</td> </tr> <tr> <td>2</td> <td>6</td> <td>97</td> <td>1455</td> </tr> <tr> <td>3</td> <td>3</td> <td>100</td> <td>1127</td> </tr> <tr> <td>4</td> <td>3</td> <td>100</td> <td>1539</td> </tr> <tr> <td>5</td> <td>3</td> <td>100</td> <td>1247</td> </tr> <tr> <td>6</td> <td>4</td> <td>98</td> <td>3674</td> </tr> <tr> <td>7</td> <td>1</td> <td>100</td> <td>1165</td> </tr> <tr> <td>8</td> <td>7</td> <td>95</td> <td>1752</td> </tr> <tr> <td>9</td> <td>7</td> <td>100</td> <td>1281</td> </tr> </tbody> </table>

The results of the experiment show that the average time taken per word is 0.36 sec and the average accuracy is about 98.8%.
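The English figures can be checked the same way (our computation, not from the original experiment): averaging the nine per-image accuracies above gives 890/9 ≈ 98.9%, in line with the reported ~98.8%, with the small difference down to rounding.

```java
// Recompute the average accuracy of the English data set from its nine rows.
public class EnglishAccuracy {
    public static double averageAccuracy() {
        int[] acc = {100, 97, 100, 100, 100, 98, 100, 95, 100};
        int sum = 0;
        for (int a : acc) sum += a;
        return (double) sum / acc.length;  // 890 / 9
    }
}
```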
The results from the original paper are as follows:

Table 3: result 3

<table> <thead> <tr> <th>Total words</th> <th>Accuracy</th> <th>Processing time</th> </tr> </thead> <tbody> <tr> <td>39 words</td> <td>98.8%</td> <td>15 sec</td> </tr> </tbody> </table>

The results clearly show the increase in efficiency achieved by the new system.

5.2 Conclusion

The experiment results show a significant increase in efficiency, especially for Hindi character recognition: keeping the accuracy the same, the system is able to process the images in less time. The original paper [30] reports an average runtime of 681 ms per word, while our experiment shows a runtime of 459 ms with almost the same accuracy. The consistency of the system for English OCR is also maintained. We are able to demonstrate a significant improvement in the ratio of English OCR to Hindi OCR runtime.

<table> <thead> <tr> <th>Original Paper</th> <th>Our Experiment</th> </tr> </thead> <tbody> <tr> <td><strong>Ratio of English to Hindi runtime:</strong> 153 ms : 681 ms</td> <td><strong>Ratio of English to Hindi runtime:</strong> 360 ms : 459 ms</td> </tr> </tbody> </table>

Table 4: Conclusion

The English-to-Hindi recognition-time ratio in paper [30] is about 153 ms : 681 ms; the results of the new system show a decrease in this ratio. This demonstrates that we are able to improve character recognition for the Hindi language as a whole by improving the recognition process, and to do so on a mobile device.

5.3 Examples

Input images for the experiment: ![Fig 22: test image](image1) ![Fig 23: test image](image2) ![Fig 24: test image](image3) Fig 25: test image. Output images: Fig 26: output image.

5.4 Future Work and Scope

OCR is a very useful and popular application, currently used in many domains. Improving the OCR process for Hindi text is a distinctive aspect of this work. India is a country with more than a billion people, and Hindi, being the national language, is used and required everywhere.
A Hindi OCR and translation application for mobile phones solves a large piece of the problem stated at the beginning of this report. The experiment could be used to build translation apps for the Hindi language, which in turn could serve various sectors such as educational institutions, transport, and research and development.

References

[33] TechCrunch. http://techcrunch.com/2012/08/11/analysis-web-3-0-the-mobile-era/

[34] IBM. www.haifa.il.ibm.com/projects/image/glt/binar.html
Generating Embedded Software From Hierarchical Hybrid Models

Rajeev Alur, *University of Pennsylvania*, alur@cis.upenn.edu
Franjo Ivancic, *University of Pennsylvania*
Jesung Kim, *University of Pennsylvania*
Insup Lee, *University of Pennsylvania*, lee@cis.upenn.edu
Oleg Sokolsky, *University of Pennsylvania*, sokolsky@cis.upenn.edu

**Recommended Citation** This paper is posted at ScholarlyCommons: https://repository.upenn.edu/cis_papers/68

Abstract

Benefits of high-level modeling and analysis are significantly enhanced if code can be generated automatically from a model such that the correspondence between the model and the code is precisely understood. For embedded control software, hybrid systems is an appropriate modeling paradigm because it can be used to specify continuous dynamics as well as discrete switching between modes. Establishing a formal relationship between the mathematical semantics of a hybrid model and the actual executions of the corresponding code is particularly challenging due to sampling and switching errors. In this paper, we describe an approach to compile the modeling language CHARON that allows hierarchical specifications of interacting hybrid systems. We show how to exploit the semantics of CHARON to generate code from a model in a modular fashion, and identify sufficient conditions on the model that guarantee the absence of switching errors in the compiled code. The approach is illustrated by compiling a model for coordinated motion of legs for walking onto Sony's AIBO robot.
Keywords: Hybrid system, embedded software, formal language, code generation, modularity

Categories and Subject Descriptors: D.2.2 [Software]: Software Engineering—Design Tools and Techniques

General Terms: Languages

1. INTRODUCTION

An embedded system typically consists of a collection of digital programs that interact with each other and with an analog environment. As computing tasks performed by embedded devices become more sophisticated, the need for a sound discipline for writing embedded software becomes more apparent (cf. [20, 25]). The model-based design paradigm, with its promise of greater design automation and formal guarantees of reliability, is particularly attractive in this domain. Contemporary industrial control design already relies heavily on tools for mathematical modeling and simulation.
Even though many such tools support automatic code generation from the model (for example, Simulink [30]), the emphasis has been on performance-related optimizations, and many issues relevant to correctness are not satisfactorily addressed. First, the precise relationship between the model and the generated code is rarely specified or formalized. Second, the continuous blocks are either ignored, or discretized before code generation. Finally, code generation typically means generation of tasks, and does not incorporate scheduling. Consequently, the correspondence between the model and the code is lost, and analysis results established for the model are not meaningful for the code. The desire to bridge this gap motivates our research.

Traditionally, control theory and related engineering disciplines have addressed the problem of designing robust control laws to ensure optimal performance of systems with continuous dynamics. For example, given system dynamics \( \dot{x} = f(x, u) \), where \( x \) represents the system state and \( u \) represents the control input, one can design a control law \( u = g(x) \) with respect to the given specification (c.f. [6, 11]). To implement this control law, one must first determine the sampling period \( \Delta \). The software, then, is a task that consists of "sense \( x \); compute \( u = g(x) \); send \( u \) to actuators," which must be scheduled every \( \Delta \) time units. Compared to the mathematical model \( \dot{x} = f(x, u) \), the behavior of the generated code may be different for many reasons: the system state \( x \) may not follow the model \( \dot{x} = f(x, u) \) precisely, sampling introduces discretization errors, and there may be numerical errors in computing \( g \). However, the discrepancy can be bounded if we assume that the system state follows the model closely, there is a bound on numerical errors, and the control law is robust.

Typical controllers, however, are rarely purely continuous. Discreteness arises due to a variety of reasons such as communication, concurrency, and multiple modes of operation. An appropriate mathematical model is, then, a hybrid system.
A hybrid system combines the traditional state-machine-based model of discrete control with continuous models of differential and algebraic equations [1, 27]. Hybrid systems have been the focus of increasing research in recent years, both in control theory and in formal modeling and verification (c.f. [5, 20]).

Consider a system with two modes. Initially the system is in the mode $M_1$ with dynamics $\dot{x} = f_1(x, u)$. It can stay in the mode $M_1$ as long as the invariant $a(x)$ holds, and switches to the mode $M_2$ if the condition $p(x)$ holds. The dynamics in the mode $M_2$ is $\dot{x} = f_2(x, u)$. Suppose we design the controllers $u = g_1(x)$ and $u = g_2(x)$ for the two modes separately. The software corresponding to this controller samples the system state $x$ every $\Delta$ time units. It has a mode variable which is initially $M_1$, and is updated to $M_2$ if $p(x)$ evaluates to true. The control output $u$ is computed by evaluating either $g_1(x)$ or $g_2(x)$ based on the value of the mode variable. In terms of discrepancy between the high-level model and the code, in addition to errors in implementing the continuous controllers in the individual modes, there can now be errors in switching from the mode $M_1$ to the mode $M_2$, which can cause significant problems. There is no general theory of approximation and robustness of controllers in the presence of switching. If a switch is missed, the resulting trajectory can be entirely different. Detecting switching events as accurately as possible has been a topic of research for simulation of hybrid systems (c.f. [16]), but such techniques cannot be implemented in real time.

In this paper, we initiate the study of formulating and limiting discrepancies between the model and the generated code for hybrid systems. We show that if the invariant $a(x)$ of a mode and the guard $p(x)$ of a switch out of this mode overlap for a duration greater than the sampling period, then the code will not miss the switching event.
Such a condition can be checked statically, at least for systems with linear dynamics. It is worth noting that this requirement implies that the model is inherently non-deterministic: semantically, the switch may happen at any time in the duration for which the invariant and the guard overlap. This is in contrast with the hypothesis that modeling languages for reactive systems should have deterministic reactions to external inputs to be implementable (c.f. [8, 25]).

The second focus of this paper is generating the code in a modular fashion. Our ideas are demonstrated in the context of the modeling language CHARON, a design environment for specification and analysis of embedded systems [2]. In CHARON, the building block for describing the system architecture is an agent that communicates with its environment via shared variables.
The language supports the operations of composition of agents to model concurrency, hiding of variables to restrict sharing of information, and instantiation of agents to support reuse. The building block for describing flow of control inside an atomic agent is a mode. A mode is basically a hierarchical state machine, that is, a mode can have sub-modes and transitions connecting them. Variables can be declared locally inside any mode with the standard scoping rules for visibility. Modes can be connected to each other only via well-defined entry and exit points. We allow sharing of modes so that the same mode definition can be instantiated in multiple contexts. Discrete updates in CHARON are specified by guarded actions labeling transitions connecting the modes. Some of the variables in CHARON can be declared analog, and they flow continuously during continuous updates that model the passage of time. The evolution of analog variables can be constrained in three ways: differential constraints, algebraic constraints, and invariants, which limit the allowed durations of flows. CHARON supports compositional trace semantics for both modes and agents [4]. For analysis it supports simulation, and formal verification of safety properties for a restricted subset, namely, models with finite discrete state and linear continuous dynamics in every mode [2, 3].

We exploit the hierarchical semantics of CHARON to generate the code in a modular fashion. Each mode is compiled as a C++ class, and the discrete and continuous update methods for a mode simply call the corresponding methods for sub-modes in a hierarchical manner. As a case study, we have developed a compiler from CHARON to Sony's AIBO robots [15]. The code generation, of course, has to address all the details in mapping the logical constructs. The specific model described in this paper corresponds to coordinating the four legs to make the robot walk.
The individual leg has four modes, and the switching conditions demonstrate the flexibility offered by high-level modeling in mixing time-triggered and event-triggered switching; some switches are triggered by updates of discrete variables by other agents, some are triggered by the elapse of time for a specified duration, and some are triggered by conditions on continuous variables. This example also shows benefits of modeling: the CHARON model is simple and small, hides all the messy platform-dependent details of programming AIBO, and can be subjected to simulation and reachability analysis to formally prove safety properties.

**Related work.** Commercial modeling tools such as RationalRose and Simulink support code generation, but the discrepancy between the model and the code is not formally addressed. Code generation from synchronous languages for reactive systems, such as Statecharts [18], Esterel [8], and Lustre [17], is significantly more rigorous. However, these languages do not support specification of continuous activities. Issues such as efficiency of the generated code and dealing with logical concurrency and communication are addressed, but these languages do not exploit sequential hierarchy for modular compilation. Shift is a language for dynamic networks of hybrid automata [12], and it supports code generation, but the focus is not on modularity and correctness issues. A complementary project is the time-triggered language Giotto, which allows describing switching among task sets so that timing deadlines can be specified in a platform-independent manner, separately from the control code [21, 22]. This concern is orthogonal, and in fact, CHARON can be compiled into Giotto. Model-based development of embedded systems is also promoted by other projects with orthogonal concerns: Ptolemy supports integration of heterogeneous models of computation [14] and GME supports integration of multiple views of the system [23].

2.
MODELING LANGUAGE

We introduce the formal modeling language CHARON, illustrating it with an example, and give the intuition for its semantics. More details can be found in [4]. To enhance the presentation, we use a pictorial view of the language constructs. The language supports both visual and textual representations.

Throughout the paper we use a case study to illustrate the modeling concepts of the language, the salient aspects of our code generation approach, and how they relate to each other. The case study models the walking process of a four-legged robot and uses this model to generate code for the robot dog AIBO, manufactured by Sony. The controller for walking is based on the description in [19]. The conceptual model for a leg is shown in Figure 1. The controller assumes that each leg has hip and knee joints that can be controlled by giving the desired angular position of the joint (i.e., angles \( j_1 \) and \( j_2 \) in Figure 1). The control objective for each leg is to ensure that the leg moves in such a way that the paw (i.e., the end of the knee joint) follows the shown trajectory. The global control objective is to ensure that only one leg is up in the air at any moment and that the center of mass of the robot is within the triangle given by the three legs on the ground.

2.1 Agents and Architectural Hierarchy

An agent represents an autonomous entity that operates by communicating with other agents via shared variables. We distinguish between atomic and composite agents. A composite agent $\langle SA, V \rangle$ consists of a set of variables $V$ and a non-empty set of sub-agents $SA$. An atomic agent $\langle M, V \rangle$ does not have any sub-agents and its behavior is given by a mode $M$, as described in the next section. Figure 2(a) shows the architecture of the model represented as the top-level agent $\text{Dog}$. It contains five concurrent sub-agents representing the legs and the brain of the dog.
The brain agent serves as the controller for the leg agents. Note that the leg agents are instances of the same agent $\text{Leg}$, which we describe below. The variables of an agent are partitioned into private, input, and output variables. Each agent has a well-defined interface, which consists of its typed input and output variables, represented visually as blank and filled squares, respectively. Connections between variables represent data flows between the agents in the model. The $\text{Brain}$ agent reads variables $x$ and $y$, representing leg positions, from the $\text{Leg}$ agents, renaming them appropriately. That is, the variable $x$ of agent $\text{LegRF}$ is renamed to $xRF$ in the agent $\text{Brain}$, and so on. The agent $\text{Brain}$ provides the desired speed of the dog, represented by the variable $v$, which is read by the $\text{Leg}$ agents. The variable $\text{token}$, which is shared among all agents and indicates which leg is currently in the air, can be modified by each of them. All of these variables, however, are internal to the $\text{Dog}$ agent. The interface variables of the $\text{Dog}$ agent are eight output variables that represent commands sent to the joint motors in each leg, and four input variables that represent ground contact sensors in each leg.

The agent $\text{Leg}$ is atomic. Its interface contains the position variables $x$ and $y$, the joint commands $j_1$ and $j_2$, the $\text{token}$ variable, and two input variables: $v$, provided by the $\text{Brain}$ agent, and $\text{ground}$, provided by the external sensor.

2.2 Modes and Behavioral Hierarchy

Modes represent behavioral hierarchy in the system design. Each mode describes a continuous behavior and a single thread of discrete control. A mode can be active or inactive during an execution, depending on whether the discrete control resides within the mode or not.
Formally, a mode $M$ is a tuple $(E, X, V, SM, Cons, T)$, where $E$ is a set of entry control points, $X$ is a set of exit control points, $V$ is a set of variables, $SM$ is a set of sub-modes, $Cons$ is a set of constraints, and $T$ is a set of transitions. Each mode has a well-defined data interface, consisting of typed global variables used for sharing state information, and also a well-defined control interface, consisting of entry and exit points through which discrete control enters and exits the mode. A top-level mode, which is activated at the start of an execution and is never deactivated, has a special entry point $\text{init}$. Each mode has a default, unnamed entry point and a default exit point.

The set $SM$ can contain a number of sub-modes connected by transitions from the set $T$. We distinguish between entry transitions, leading from an entry point of $M$ to an entry point of a sub-mode of $M$; exit transitions, leading from an exit point of a sub-mode to an exit point of $M$; and internal transitions, connecting an exit point of a sub-mode to an entry point of another sub-mode. Each transition has a guard and an action, and is used to transfer discrete control from one sub-mode to another. During an execution, a transition occurs instantaneously and can be taken when its guard is satisfied. When the transition is taken, its associated action is executed, assigning new values to the variables of the mode.

In addition to discrete steps, the variables of the mode continuously evolve with the passage of time according to the set of constraints $Cons$. A mode can contain three kinds of constraints. Continuous trajectories of a variable $x$ can be given either by an algebraic constraint $A_x$, which defines the set of admissible values for $x$ in terms of the values of other variables, or by a differential constraint $D_x$, which defines the admissible values for the first derivative of $x$ with respect to time.
Additionally, $Cons$ can contain an invariant $I$, which is a boolean predicate over the mode variables. Only those trajectories are allowed that continuously satisfy the invariant of the mode.

We represent modes visually as state machines with transitions between them. Transitions are labeled by guards and actions. To make it easier to visually distinguish between guards and actions, actions are boxed. Entry and exit points are denoted as blank and filled circles, respectively. Transitions incident to a default entry or exit point, which are not shown on the picture, are visually attached directly to the box representing the mode.

The mode $\text{LegMode}$, the top-level mode of the agent $\text{Leg}$, is shown in Figure 2(b). Invariants, as well as the complicated expression for the guard $g_{stop}$, are omitted to avoid cluttering the picture. The mode contains five sub-modes. The sub-mode $\text{GetUp}$ is entered during initialization and ensures that the dog is standing before walking begins. It has its own internal structure, which we do not discuss here. The other four modes correspond to the four segments of the leg trajectory in Figure 1. Note that the two sub-modes that move the leg up and down are instances of the same mode with different parameter values.

To ensure stability of the robot, only one leg can be in the air at any time. We use the shared variable $\text{token}$ to switch legs. A leg can lift off the ground only if the token is equal to its number (given as the mode parameter $\text{MTOKEN}$). The leg then moves diagonally upwards until the desired height is reached, and the mode is switched to begin horizontal movement. When the leg has moved forward enough, another mode switch happens and the leg is moved diagonally down. When the leg reaches the ground, a signal from the paw sensor sets the variable $\text{ground}$, the mode switch occurs, and the token is passed to the next leg by the action of the transition.
At the lowest level of the behavioral hierarchy are atomic modes. They describe purely continuous behaviors. For example, Figure 3 illustrates the behavior prescribed by the mode $\text{UpDown}$, which specifies the desired trajectory for the paw moving diagonally up or down by means of a differential constraint that asserts the relationship between the horizontal and vertical velocities of the paw, represented as the first time derivatives of the paw coordinates $x$ and $y$, and the input variable $v$, representing the desired speed. The trajectory is also constrained by the invariant specifying a range of valid vertical positions.

```
mode UpDown(real dir) {
  read real v;
  write real x, y;
  diff { d(x) == 3*v; d(y) == dir*3*v; }
  inv  { y >= y_limit; y <= y_upper_limit; }
}
```

Figure 3: An atomic mode

2.3 Semantics

The CHARON language has a modular trace-based formal semantics. That is, the semantics prescribes how to construct the set of executions of an agent or mode based on its sub-agents (or sub-modes) and the constraints within the mode. Instead of presenting the formal semantics for modes and agents, which can be found in [4], we give an informal description of an admissible execution here. Later, in Section 4, we present a simulator that agrees with the semantics at certain discrete points in time, and constructs the execution in a precise (albeit non-modular) fashion.

An execution of a mode $M$ is constructed as follows. The state of a mode includes the values of the mode variables and, in a non-atomic mode, an additional variable that records the currently active sub-mode. A mode becomes active when the control is transferred to one of its entry points. The mode can remain active as long as its invariant is satisfied. As soon as the invariant is violated, time cannot progress any further, and the mode is forced to transfer control to one of its exit points via an exit transition.
If the invariant is satisfied, the mode can take a continuous step, during which time progresses and the state of the mode is continuously updated according to the differential and algebraic constraints of the mode and its active sub-mode. Discrete control is not affected by a continuous step. Alternatively, if a mode has an enabled transition $t$, it can execute a discrete step, during which time does not progress and $t$ is executed. Mode variables are updated according to the action of $t$ and, if the target of $t$ is an entry point of a sub-mode $m$, $m$ becomes the active sub-mode. A transition $t$ is enabled if the guard of $t$ is satisfied and control has been transferred to the control point that is the source of $t$.

Consider the example in Figure 2(b). The transition in the mode Leg from the sub-mode GetUp to the sub-mode Walk can be taken whenever GetUp completes its execution by transferring control to its exit point ex. By contrast, when the leg touches the ground and the variable ground is set, the transition interrupts the execution of the sub-mode UpDown(1) and transfers control to OnGround.

An execution of an agent is constructed by either taking a discrete step in one of its sub-agents or by taking a continuous step in all sub-agents simultaneously. The execution stops if time cannot be advanced (that is, one of the modes has a violated invariant) and no mode has an enabled transition. In this case, we say that the model deadlocks. If a model is deadlock-free, that is, whenever an invariant is violated there is an enabled transition in one of the modes, we call the model a non-blocking hybrid system.

3. CODE GENERATION

In this section, we present the code generator that compiles CHARON models for the target platform. The process can be decomposed into two phases, as shown in Figure 4. The front-end transforms the CHARON model into a high-level language representation.
One of the main differences between CHARON models and high-level language programs is that in the former the state is defined in the continuous-time domain whereas in the latter the state changes in a discrete fashion. We approximate the continuous behavior by updating the state of the continuous model periodically every $\Delta$ time units. Obviously, we lose certain properties of the model due to approximation, but we can guarantee that transitions are not missed if the period $\Delta$ is small enough. We will come back to this issue later in Section 4. The code generated by the front-end is platform-independent and needs to be ported to the execution environment of a specific target platform. The back-end performs platform-specific adaptation of the code to bind abstract objects of the model to concrete objects of the platform. The resulting code can be compiled into a platform-specific binary form using a target compiler, as we will explain in Section 3.2. Faithful implementation of the CHARON semantics in a code generation algorithm is complicated by the fact that, conceptually, executions of agents proceed concurrently, while on a single-processor platform they will, by necessity, be executed sequentially. Therefore, we have to ensure that the order of evaluation of the agents is consistent with the dependencies among variables in the model. For example, if an algebraic constraint for a variable $x$ in a mode contains the variable $y$ in its right-hand side, the constraint updating $y$, possibly in a mode of another agent, must be processed before the constraint for $x$. The same applies for the evaluation of guards: before the guard of a transition is evaluated, we have to ensure that all the variables it uses have been updated. These dependencies can change dynamically as the execution moves from mode to mode, and hence, dependencies will have to be updated with each mode switch. 
To make manipulation of dependencies easier, we assume that there are no cyclic dependencies in any state of the system during its execution.

3.1 Front-end

The role of the front-end is to parse the given CHARON model into an abstract syntax tree and map each node of the tree into an object of the target programming language. We chose C++ as an intermediate target language, mainly because the object-oriented features of the language best suit CHARON and make the code generation process simpler, and also because the language has been deployed in many real systems, including AIBO. Modularity of the original model is captured by aggregating objects belonging to the same mode in a C++ class that can be compiled separately. The C++ class consists of methods implementing equations and transitions, and of pointers to the external variables and the sub-modes.

The code generator produces a C++ class for a given abstract syntax tree of a mode \( M = (E, X, V, SM, Cons, T) \), consisting of entry points \( E \), exit points \( X \), variables \( V \), sub-modes \( SM \), constraints \( Cons \), and transitions \( T \), as described in Algorithm 1 in the Appendix. The algorithm makes a recursive call for each sub-mode \( m \in SM \) to generate a corresponding separate class. Note that the algorithm does not reference any elements of upper-level modes or any elements of sub-modes except for the sub-mode interface. This implies that the generated code is modular and can be compiled and executed independently of other modes. In the following, we describe the algorithm in more detail for each element of a mode. To simplify the algorithm description, we assume a utility function \( \text{GenStmt}() \) that produces a syntactically correct C++ statement from given inputs.

**Variable.** Variables in CHARON are either local or global.
Each local variable \( v \in V \) is translated into a variable class instance, while each global variable is translated into a reference to a variable class instance that is instantiated at an upper-level mode where the same variable is declared as a local variable. Variables are represented by instances of a class \( \text{var} \) that has methods \( \text{read}() \) and \( \text{write}() \), used to get the value of the variable and to assign a new value to it. Top-level variables need to be handled differently, since they are mapped to platform-specific APIs. This mapping is done by overriding the \( \text{read}() \) and \( \text{write}() \) methods in a derived class of \( \text{var} \) by the back-end, without modifying the code produced by the front-end.

**Differential constraint.** A differential constraint \( D_x \in Cons \) of the form \( \dot{x} = f_D \) declares that a variable \( x \) should evolve continuously at a rate given by the expression \( f_D \) over variables, which may themselves evolve continuously. Theoretically, this requires evaluation of the expression \( f_D \) and an update of the variable \( x \) at every infinitesimal period. We approximate this specification by an assignment statement that is executed at every period to increment the variable in proportion to the length of the period, which is given by the parameter \( \text{delta} \) of the function. For example, Figure 5 shows the code for the differential equations of the mode UpDown (\( \dot{x} = 3v; \dot{y} = dir \cdot 3v \)) given in Figure 3. This approximation, known as Euler's method, is efficient to compute and produces good results for our models. More advanced, but more expensive, methods can be used to improve accuracy [24].

**Algebraic constraint.** An algebraic constraint declares equations involving variables that should be satisfied at all times.
In CHARON, an algebraic constraint \( A_x \in Cons \) for a variable \( x \) is specified in the form of an equation \( x = f_A \), where \( f_A \) is an expression containing other variables. Such constraints are translated into assignment statements, which are evaluated in the dependency order. This requires a dynamic dependency graph between equations that is updated at mode switches. A dependency tracking mechanism is implemented in the base class for modes and does not depend on the generation algorithm. We omit the implementation details due to lack of space.

**Invariant.** An invariant \( I \in Cons \) declares a condition that should be satisfied at all times while the mode is active. In general, violation of an invariant means that the implementation is not faithful to the specification, or that the model is infeasible. We translate each invariant to an assertion statement for run-time checking of correctness. Our framework also provides a means for static analysis of invariant violations, as explained in Section 4.

**Transition.** Transitions specify the control flow of the model. A transition \( t \in T \) is translated into an if-then statement where the if-condition contains the guard and the then-block contains the optional discrete actions \( \alpha \), as shown in Algorithm 2 in the Appendix. In addition to the guard, when the transition specifies a specific control point in the source mode, the if-condition also checks the location of control. When the guard is true and control has been transferred to the source control point, the associated discrete action, in the form of a set of assignments, is executed. Note that this implementation enforces that a transition is taken as soon as its enabling is detected. While the CHARON semantics does not impose this urgency (that is, an enabled transition does not have to be taken immediately), the urgent interpretation is more amenable to avoiding switching errors, as we will explain in the next section.
After executing the discrete action, control is passed to the destination control point by invoking the corresponding method of the destination mode. Evaluation of guards also proceeds in the order of variable dependencies, as described above. When more than one transition is enabled and there is no dependency between them, the code executes the transitions in an order chosen randomly by the code generator. Note that any choice among enabled transitions is valid provided that the model is non-blocking (i.e., taking the transition does not lead to a violation of the invariant).

**Control Point.** The code generator produces for each entry point \( e \in E \) a corresponding method \( \text{e()} \) that implements the entry transition (see Algorithm 3 in the Appendix). Each generated method checks the guard \( g \), performs the associated discrete actions \( \alpha \) when the guard is true, and invokes the method corresponding to the destination entry point \( m'.e'() \) to trigger a cascade of entry transitions leading to a leaf mode. In addition, it updates the pointers in the data structure that represents the variable dependency. On the other hand, for each exit point \( x \in X \), a method is generated that tests whether control has been transferred to the control point. The method checks a flag \texttt{exitCode} that is set by the function \texttt{trans()} when an exit transition is performed.

```cpp
void UpDown::diff(double delta) {
    x += (3*v) * delta;      // d(x) == 3*v;
    y += (dir*3*v) * delta;  // d(y) == dir*3*v;
}
```

Figure 5: Generated code for differential equations.

**Mode.** The class for modes has two methods, \texttt{continuousStep()} and \texttt{discreteStep()}, that perform the evaluation of the mode. Each method is invoked by the corresponding method of the parent mode. They are implemented in a base class \texttt{mode} since they are common to all the modes.
The class also contains run-time information such as the pointer to the currently active sub-mode. These pointers constitute a linked list of active sub-modes from the top-level mode to some leaf mode. The methods of the top-level mode are invoked by the corresponding methods in the class for agents.

**Agent.** We have implemented a single-threaded code generation scheme, since hybrid models generally have much finer-grained concurrency than is supported by the traditional multitasking mechanisms of operating systems. That is, the executions of concurrent sub-agents are interleaved at the granularity of the period \( \Delta \) in a single thread of execution. The top-level agent has a single method \texttt{update()} that is called periodically every \( \Delta \) by a timer or a periodic task of the platform. It executes first the continuous steps and then the discrete steps of all the sub-agents.

3.2 Back-end

The C++ code generated by the front-end can be compiled into binary object code suitable for the target platform once a target compiler is given. The next step is to relate variables to specific objects in the target platform. For example, if the model denotes a joint of the head of the robot as a variable \( x \), we need to relate the variable \( x \) to the servo motor that controls the position of the head. In other words, we need to bind objects in the model to objects in the target platform, just like high-level language compilers bind variables to memory addresses. While variables in programming languages are generally bound only to memory addresses, variables in the model may be bound to a hardware register, an I/O port, or a parameter or the return value of a system call/API, as well as a memory address, depending on the abstraction level of the program execution environment. These bindings require extra code that \emph{glues} objects in the model and the platform.
Compiled and linked together, the glue code allows the generated code to communicate with the platform transparently. The back-end generates the glue code when information on the binding is given. We use a Makefile-like script to describe the relationship between objects in the model and objects in the platform. Specifically, the script consists of colon-separated dependency relations and optional rules (i.e., code fragments) that relate the two. For example, a script that relates a variable \( x \) to an API function \texttt{syscall()} and a constant \texttt{HEAD_JOINT} used as a parameter is shown in Figure 6. The script shown in the figure lets the back-end translate write access to the variable \( x \) into an API call \texttt{syscall(HEAD_JOINT, x)} with the additional parameter \texttt{HEAD_JOINT} defined in the API header file \texttt{system.h}. This code fragment is appended without modifying the code generated by the front-end, by creating a derived class that overrides the default \texttt{write()} method.

3.3 Modular Compilation

The generated code is modular in the sense that each mode and agent is mapped to a C++ class that can be compiled separately and reused in different contexts. Each module can be reused not only for modeling purposes, but also at the code level. For example, the code for the walking process can be used in a larger application without modifying the original model or the generated code. In addition to reusability, modularity is salient in two aspects. First, hybrid system models in many cases contain both the controller and the environment, and they need to be decoupled since only the former is subject to code generation. Our code generator allows code generation for selected modes/agents only, and the decoupling comes naturally. Second, modularity of the generated code is essential when the target platform is a distributed system consisting of multiple processing elements.
We can port each module to a different execution environment, possibly using different target compilers and/or compilation options. The distributed modules can interact with each other when the API for communication is associated with the variable class, since variables are the only interface between modules.

```cpp
#include "system.h"
#define HEAD_JOINT "PRM:/r1/c1-Joint2:j1"
void syscall(const char *, double);
```

Figure 6: Script for binding.

4. DISCRETIZATION ERRORS

In this section, we analyze code generation from a given hybrid system model; that is, we analyze the accuracy of the generated code with respect to the mathematical semantics of the hybrid system. A variety of errors are introduced into the system during the generation of discrete code with a fixed sampling step-size. An overview of the various classes of errors is given in Section 1. Here, we only consider the errors due to the discretization of a hybrid system. For this we assume, without loss of generality, that our hybrid system consists of \( n \) concurrent atomic agents \( A_1, A_2, \ldots, A_n \). For an atomic agent \( A \) we denote the set of all variables in any sub-mode, assuming naming conflicts have been resolved, by \( V_A \), and the set of valuations of these by $\Sigma_A$. An active mode of an agent $A$ consists of a path from the top-level mode of the agent $A$ to some leaf mode. A state of an atomic agent then consists of an active mode and a value for each of its variables $V_A$. The set of states of an atomic agent $A$ is denoted by $X_A$, and the initial set of states is denoted by $X_A^0 \subseteq X_A$.
We can then define the set of active constraints $\text{Cons}(M')$ given a state $x = (M', v)$ as the union of all the constraints in all the modes in $M'$, the set of active invariants $I(M')$ as the union of all the invariants in $M'$, and the set of active transitions $T(M')$ as the set of all the transitions of modes in $M'$ whose source is an exit control point of a mode in $M'$. We denote the set of valuations of $V_A$ that satisfy all invariants of an active mode $M'$ by $\mathcal{I}(M') \subseteq \Sigma_A$.

We want to check the feasibility of the code generation task for a sampling period $\Delta$. The environment of an agent plays a central role in determining this kind of feasibility. We consider a closed system of agents, assuming without loss of generality that naming conflicts have been resolved. We define the set of globally active modes $M_A$ for an agent $A = (\{A_1, \ldots, A_n\}, V_A)$, where each agent $A_i$ is atomic, as the cross-product of the active modes of its sub-agents. The set of all variables of $A$, denoted by $V_A$, is the union of the variables of its sub-agents, and the set of valuations of all variables is denoted by $\Sigma_A$. A state of the agent $A$ then consists of a globally active mode and a valuation of all its variables. The set of all states of an agent $A$ is denoted by $X_A$, and the set of initial states by $X_A^0 \subseteq X_A$. The set of globally active constraints $\text{Cons}(M')$ given a state $x = (M', v)$ is the union of the active constraints of its sub-agents, the set of globally active invariants $I(M')$ is the union of the active invariants of its sub-agents, and the set of globally active transitions $T(M')$ is the union of the active transitions of its sub-agents.
We call a function $\Phi : \Sigma_A \times \mathbb{R}_{\geq 0} \rightarrow \Sigma_A$ an admissible flow for the globally active mode $M'$ if $\forall v \in \Sigma_A : \Phi(v, 0) = v$ and $\Phi(v, t)$ is a solution to all globally active algebraic and differential constraints in $M'$.

We define a fixed step-size simulator for a given hybrid system as a first step towards the generated code. The fixed step-size simulator with period $\Delta$ for a given CHARON model can be seen as a computable approximation of the mathematical hybrid system model. Given an admissible initial state of an agent, we evaluate the behavior of the agent at time points $0, \Delta, 2\Delta, 3\Delta, \ldots$ As mentioned earlier, we assume that the dependency graph of atomic agents based on their globally active transitions is acyclic.

Definition 1. A fixed step-size simulator with period $\Delta$ given a closed agent $A = (SA, V)$ of atomic sub-agents $SA = \{A_1, \ldots, A_n\}$ computes a potentially partial function $f_A : \mathbb{N} \rightarrow X_A$. The function $f_A$ is defined as $f_A(0) \in X_A^0$ and $f_A(k + 1) = f_n(f_A(k))$, where

1. $f_0(M, v) = (M, \Phi(v, \Delta))$, where $\Phi$ is an admissible flow in $M$ such that $\forall t \in [0, \Delta] : \Phi(v, t) \in \mathcal{I}(M)$; and

2. there exists an admissible ordering $\sigma : \{1, \ldots, n\} \rightarrow SA$, corresponding to a full ordering of the partial order given by the dependency graph of atomic agents based on their active transitions, such that one of the following two evaluations is used for $1 \leq i \leq n$:

(a) if the invariants of the active modes $M_{\sigma(i)}$ of atomic sub-agent $\sigma(i)$ are not violated, then $f_i(M, v) = f_{i-1}(M, v)$; or

(b) if there exists an enabled active transition $t \in T(M_{\sigma(i)})$, that is, $\text{guard}_t(v) = \text{true}$, where $t$ switches to the globally active mode $M'$, then $f_i(M, v) = (M', \text{actions}_t(f_{i-1}(M, v)))$.
Note that a fixed step-size simulator with period $\Delta$ for a non-blocking hybrid system $H$ may deadlock. In fact, there are non-blocking hybrid systems for which no step-size $\Delta$ produces a non-blocking fixed step-size simulator. It should also be noted that this definition describes the computation of a function with non-deterministic choices. A fixed step-size simulator thus can compute a set of admissible functions according to this definition. The goal of the forthcoming analysis is to ensure the feasibility of computing one such admissible function using the generated code. Lastly, it should be noted that the invariant of some active mode may be violated in case 2(b). Part 2 models instantaneous transition jumps between modes, which are allowed to pass through intermediate, zero-time invariant violations. However, for a non-blocking behavior of the fixed step-size simulator, one needs to ensure that $f_A(k)$ does not violate any invariant for any $k \in \mathbb{N}$.

This model assumes that the simulator can sense the world, compute necessary updates, and act accordingly, all in zero time. This is not a realistic model of embedded systems. An embedded system needs to accommodate a time delay for sensing the world, as well as computation and execution time. We nevertheless describe properties of the fixed step-size simulator as a first approximation of a model of our generated code.

We define a class of hybrid systems for which we can prove that a fixed step-size simulator is an appropriate execution model. We call an execution appropriate if it corresponds to a valid trace of the hybrid system restricted to sampling points. We now define a class of closed agents for which we can show that it can be faithfully simulated by the aforementioned fixed step-size simulator.
Intuitively, we consider the class of closed agents for which guards and invariants over continuously updated variables overlap for a duration greater than the sampling period. We use the non-determinism of the continuous flow to allow a simulation to switch modes at discrete time points. We define a function $\text{Post}_\Phi : 2^{\Sigma_A} \times \mathbb{R}_{\geq 0} \rightarrow 2^{\Sigma_A}$ for an admissible flow $\Phi$ of an agent $A$ of atomic sub-agents as: $\text{Post}_\Phi(X, \tau) = \{v \in \Sigma_A \mid \exists x \in X, 0 \leq t \leq \tau : \Phi(x, t) = v\}$.

Definition 2. Given a globally active mode $M$ for a closed agent $A = (SA, V)$ of atomic sub-agents $SA = \{A_1, \ldots, A_n\}$ and an admissible flow $\Phi$, define the guard set $G \subseteq \mathcal{I}(M)$ as the set of valuations of $V_A$ for which at least one globally active transition $t$ is enabled. A globally active mode $M$ for the agent $A$ is called an $\varepsilon$-lookahead mode iff

$$\text{Post}_\Phi(\mathcal{I}(M) \setminus G, \varepsilon) \subseteq \mathcal{I}(M). \quad (1)$$

An $\varepsilon$-lookahead agent $A = (SA, V)$ is a closed agent of atomic sub-agents such that all its globally active modes are $\varepsilon$-lookahead modes.

It can be shown that a $\Delta$-lookahead agent can be faithfully simulated by a fixed step-size simulator with period $\Delta$; that is, the fixed step-size simulator as defined above computes a trace at steps 0, 1, 2, 3, ... that corresponds to a real trace of the $\Delta$-lookahead agent at the time points 0, $\Delta$, $2\Delta$, $3\Delta$, ... It is clear that urgent switching, i.e., taking a transition as soon as its guard is enabled, guarantees non-blocking simulation using period $\Delta$.
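For a single variable with constant-rate dynamics, the $\varepsilon$-lookahead condition discussed above reduces to interval arithmetic. The following sketch assumes an invariant interval $[lo, hi]$, a guard region $y \leq g$ at the lower end, and a constant rate $r < 0$; all names and the one-variable setting are illustrative assumptions.

```cpp
// One-variable sketch of the eps-lookahead check: the part of the
// invariant where the guard is NOT enabled is (g, hi]; flowing it for
// time eps at rate r < 0 reaches down to g + r*eps, so
// Post(I \ G, eps) stays inside the invariant iff g + r*eps >= lo.
bool isLookaheadMode(double lo, double hi, double g, double r, double eps) {
    if (g < lo || g > hi) return false; // guard must overlap the invariant
    double lowest = g + r * eps;        // lowest value reachable in eps time
    return lowest >= lo;
}
```

In other words, the guard-invariant overlap $g - lo$ must be at least $|r| \cdot \varepsilon$, which is exactly the overlap-versus-step-size trade-off exploited in the case study.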
**Theorem 1.** A non-blocking $\Delta$-lookahead agent $A$ can be faithfully simulated by a fixed step-size simulator with period $\Delta$; that is, for any admissible trace $r_A : \mathbb{R}_{\geq 0} \rightarrow X_A$ of the $\Delta$-lookahead agent $A$ there exists a simulation trace $f_A$ that can be computed by a fixed step-size simulator such that $\forall k \in \mathbb{N} : r_A(k\Delta) = f_A(k)$.

Although Theorem 1 guarantees faithful simulation of a $\Delta$-lookahead agent, it does not mean that generated code embedded in a physical system will produce a faithful trace. One still needs to address issues such as timing delays introduced through sensing, computation, and actuation. Moreover, if we try to discretize an agent using a period $\Delta$ but the agent is not a $\Delta$-lookahead agent, it is apparent that even a fixed step-size simulator cannot guarantee a faithful simulation, since condition (1) is not met for some reachable pair of guard set and invariant set. Condition (1) can be tested efficiently for systems with linear continuous dynamics using over-approximations [7]. Additionally, it should be noted that to prove that a mode is an $\varepsilon$-lookahead mode, it is enough to show that all pairs of active transition guard sets and invariant sets exhibit a large enough overlap, following an analogous definition. This modular proof technique is used in Section 5 to show feasibility of the code generation approach for Sony's robotic dog AIBO.

5. CASE STUDY

To apply our model-based approach to a real system, we used Sony's four-legged robot, AIBO, as our experimental platform. The robot is a typical example of a hybrid system, consisting of analog devices for input and output, and a digital control system to control the devices. The control system is an embedded computer based on a MIPS microprocessor running at 384 MHz, equipped with 32 MB of main memory and 16 MB of flash memory.
The robot contains servo motors controlling the positions of the joints in the legs and the head, an LED display to simulate emotional expression, a speaker for voice, and input devices such as a camera, microphones, and touch sensors. In this study, we use the servo motors to make the dog walk and the touch sensors to detect ground contact. Applications can actuate motors so that the joints are positioned at a desired angle by sending a message containing the command. The system can process a vector of commands to the motors as frequently as once every eight milliseconds (i.e., $\Delta \geq 0.008$).

A typical program for the robot is coded as a C++ class that contains methods invoked whenever a message arrives at the object. These methods typically implement a finite state machine that determines the behavioral mode of the system. In each state, the program composes and sends a message containing the desired angle of each joint, determined by the dynamics of the current state and the elapsed time since the last update of the joint. Since there is no explicit means to deal with time and dynamics in C++, such code easily becomes awkward and hard to understand. In addition, the message-driven execution style tends to leave code for iterative jobs unstructured, because control of execution leaves the routine at every iteration. As the behavior becomes more complicated, the code may become unmanageably large for hand coding and debugging. In contrast, Charon supports hierarchical state machines, explicit time manipulation, differential equations describing dynamics, and static/dynamic analysis for correctness, making it ideal for modeling the event- and time-synchronized behavior of robots.

Figure 7 shows an example message handling routine that uses the generated code in place of hand-coded if-then-else statements and joint value calculation routines. The method \texttt{update()} traverses active modes to trigger evaluation of equations, which assign a new value to the left-hand side variable.
As explained in Section 3, the assignment operator is redefined such that it triggers a call to the method \texttt{write()}, which can be overridden by a customized function that finally calls platform-specific APIs, as shown in the figure.

To demonstrate the modularity of the generated code, we combined a sample program that comes with the official SDK for the robot [28] with code generated from a Charon model. The original sample program moves the legs to a set position and then controls the position of the head towards an object (a pink ball). We added our walking model to this program by slightly modifying the if-then-else statements of the original program so that it has an additional state that invokes the generated Charon code. The dog then tracks the ball while walking.

The error analysis described in Section 4 can be used to show that our walking model can be faithfully simulated on the mathematical model of a fixed step-size simulator. Consider, for example, the sub-mode $\text{UpDown}(-1)$: the invariant $y_{limit} \leq y \leq y_{upperLimit}$ and the guard $y \leq y_{lift}$ overlap assuming that $y_{limit} \leq y_{lift} \leq y_{upperLimit}$, while the dynamics are $\dot{y} = -3v$. Clearly, a fixed step-size simulator with time step $\Delta$ such that $3v\Delta \leq y_{lift} - y_{limit}$ will guarantee a faithful simulation. On the other hand, if $y_{lift} - y_{limit}$ is too small for a given $\Delta$, that is, $3v\Delta > y_{lift} - y_{limit}$, a fixed step-size simulator cannot guarantee a faithful discretization.
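The overlap condition for the UpDown example amounts to simple arithmetic: $y$ changes by $3v\Delta$ per step, so the step size must not let it jump across the guard-invariant overlap. A sketch of this calculation; the concrete numbers in the usage note are invented for illustration.

```cpp
// Largest admissible step size for the UpDown example discussed above:
// the overlap between guard and invariant is yLift - yLimit, and y
// changes by 3*v*Delta per step, so we need 3*v*Delta <= yLift - yLimit.
double maxStepSize(double yLift, double yLimit, double v) {
    double overlap = yLift - yLimit; // width of the guard/invariant overlap
    double rate = 3.0 * v;           // |dy/dt| in the mode
    return overlap / rate;           // largest Delta respecting the overlap
}
```

For instance, with an overlap of 0.6 units and $v = 10$, the admissible step size is bounded by $0.6 / 30 = 0.02$ time units.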
Notice the inherent duality of this approach: a given time-step suggests a minimum guard-invariant overlap, while a given overlap suggests a maximum time-step. Also notice that if the model is deterministic (\(y_{lift} - y_{limit} = 0\) in this example), a fixed step-size simulator is unrealizable, because it implies that \(\Delta\) should be zero.

```cpp
void Walk::Ready(const OReadyEvent& event) {
    rgn = FindFreeRegion();
    charon->update(DELTA);
    subject[event.SbjIndex()]->SetData(rgn);
    subject[event.SbjIndex()]->NotifyObservers(rgn);
}

class joint : public var {
public:
    joint(int id) { this->id = id; }
    virtual void write(double value) {
        SetJointValue(rgn, id, value, newValue);
    }
private:
    int id;
};
```

Figure 7: Generated code.

6. CONCLUSION

We presented a framework for automatic code generation of embedded software from high-level hybrid system models and its implementation for a robotic platform. We believe that a model-based approach to embedded software development is beneficial for complex hybrid systems. Traditionally, software development for robot control includes a lot of hand-crafting to ensure correct timing and desired performance. Furthermore, debugging is more difficult because reasoning is done at the level of code, rather than at the level of the abstract model. In contrast, automatic code generation should result in faster development and higher-quality code, since it eliminates errors that are often the result of manual coding. In addition, it is easier for the designer to concentrate on higher-level design issues, such as a more efficient walking style. We spent only a few days to make the walking dog example work, even though we have very little experience with robot programming, and walking is known to be one of the more complex behaviors to program.

Several aspects of the code generation framework have been left for future work. The first direction concerns ensuring adequate performance of the generated code to satisfy real-time constraints.
In our target platform, the performance requirement was that the code be executed once every 8 msec. The computation power provided by the embedded computer inside the robot was sufficient in our experiment. However, as the model becomes more complicated and requires more computation power, code optimization can become an important issue.

The second challenge is more systematic generation of the glue code that connects the platform-independent generated code to the target platform. Ideally, we need a platform specification language that captures, in addition to the platform API, the resources of the platform such as the number and types of processors, available memory, communication bandwidth, etc. The code generation back-end can use this information to generate more efficient code and consider various implementation trade-offs.

The third issue concerns a better understanding of the relation between the continuous model and the discretized code. In this paper, we considered errors under the assumption that sensing, computation, and actuation are performed instantaneously at the beginning of each period. For an execution model with total period \(\Delta\) that explicitly includes timing delays, we need to require a \((2\Delta)\)-lookahead hybrid system. Assuming that sensing, computation, and actuation can be performed within \(\Delta\), the system reacts to inputs in at most \(2\Delta\) time units. To illustrate this, assume a non-zero time delay \(\delta_S\) for sensing. The sensed inputs, representing the values at some time \(k\Delta\), are available only at time \(k\Delta + \delta_S\). If a computation delay of \(\delta_C\) is assumed, the system can react only at time \(k\Delta + \delta_S + \delta_C\). If no guard is enabled at time \(k\Delta\), this implies that the system could react to a transition only at time \((k + 1)\Delta + \delta_S + \delta_C\). Given that \(\delta_S + \delta_C \leq \Delta\), to be safe, we require a \((2\Delta)\)-lookahead system.
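The delay argument above is a simple budget check; the helpers below restate it in code (the function names are illustrative, not part of the generator).

```cpp
// If sensing (deltaS) plus computation (deltaC) fit within one period
// Delta, inputs sampled at time k*Delta are acted on no later than
// (k+1)*Delta + deltaS + deltaC <= (k+2)*Delta, i.e. within 2*Delta --
// hence the (2*Delta)-lookahead requirement discussed above.
bool delaysFitPeriod(double delta, double deltaS, double deltaC) {
    return deltaS + deltaC <= delta;
}

double worstCaseReactionTime(double delta) {
    return 2.0 * delta; // guard becomes enabled just after a sample is taken
}
```

For the AIBO platform with \(\Delta = 8\) ms, this bounds the worst-case reaction time at 16 ms, provided sensing and computation together stay under 8 ms.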
Finally, the framework needs to be extended to multi-threaded and multi-processor code generation. We have described the generation of single-threaded implementations. We have also implemented multi-threaded code generation with tight synchronization between threads. It is interesting to consider the generation of multi-threaded code with different periods for different tasks. This will require us to explore the scheduling of agent threads and a more sophisticated error analysis. Multi-processor code generation is a more long-term goal and will require us to map shared variables of the model into message passing and to consider communication delays.

Acknowledgements

We would like to thank the members of the Hybrid Systems Group of the University of Pennsylvania for their various contributions to the \textsc{Charon} framework. Special thanks go to Yerang Hur, who helped us during the initial development of the code generator, and to Jim Ostrowski, Pradyumna Mishra, and Sachin Chitta of the GRASP Lab for their help with the AIBO platform.

7. REFERENCES

\(^1\)A similar \((2\Delta)\) bound has been derived for the implementability of switching behaviors on PLCs (programmable logic controllers), where \(\Delta\) is the upper bound on the length of a PLC cycle consisting of polling inputs, computation, and delivery of outputs, using a timed-automata variant framework [19].
APPENDIX: Code Generation Algorithm

Algorithm 1 CodeGen(M = (E, X, V, SM, Cons, T))
  GenStmt("class", M, ": public mode");
  /* sub-modes */
  for all m ∈ SM do
    GenStmt("mode", m);
    if m is not visited then CodeGen(m); /* recursive call */
  for all differential constraints in Cons do GenStmt(...); /* differential equations */
  for all algebraic constraints in Cons do GenStmt(...);    /* algebraic equations */
  for all invariants in Cons do GenStmt(...);               /* invariants */
  /* transitions and control points */
  CodeGenTrans(T);        /* Algorithm 2 */
  CodeGenCtrlPoint(E, X); /* Algorithm 3 */

Algorithm 2 CodeGenTrans(T)
  for all transitions t = (x, g, α, e') ∈ T do
    GenStmt("if (", g, ")");
    for all a ∈ α do GenStmt(a);     /* discrete actions */
    GenStmt(...);                    /* update dependency links */
    GenStmt(m', ".", e', "()");      /* entry transition from m.x to m'.e' */
    GenStmt("exitCode = ", x);       /* set exit code */

Algorithm 3 CodeGenCtrlPoint(E, X)
  /* entry points */
  for all e ∈ E do
    GenStmt("void ", e, "()");
    GenStmt("exitCode = null");      /* reset exit code */
    ...                              /* discrete actions, update and remove
                                        dependency links, invoke destination entry */
  /* exit points */
  for all x ∈ X do
    GenStmt("bool ", x, "()");
    GenStmt("return exitCode == ", x); /* test if the mode exited through x */
LEARNING TO SELECT EXAMPLES FOR PROGRAM SYNTHESIS Anonymous authors Paper under double-blind review ABSTRACT Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, that maps the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, it is commonly formulated as a constraint satisfaction problem, where input-output examples are encoded as constraints and solved with a constraint solver. A key challenge of this formulation is scalability: while constraint solvers work well with few well-chosen examples, constraining on the entire set of examples incurs significant overhead in both time and memory. In this paper we address this challenge by constructing a representative subset of examples that is both small and able to constrain the solver sufficiently. We build the subset one example at a time, using a trained discriminator to predict the probability of unchosen input-output examples conditioned on the chosen input-output examples, adding the least probable example to the subset. Experiments on a diagram drawing domain show that our approach produces example subsets that are small and representative for the constraint solver. 1 INTRODUCTION Program synthesis (or synthesis for short) is a special class of regression problems where, rather than minimizing the error on an example dataset, one seeks an exact fit of the examples in the form of a program. Applications include synthesizing database relations (Singh et al. (2017)), inferring Excel formulas (Gulwani et al. (2012)), and compilation (Phothilimthana et al. (2016)). In these domains, synthesizers are able to come up with complex programs consisting of branches, loops, and other programming constructs. Recent efforts (Ellis et al. (2015); Singh et al. (2017)) show an interest in applying the synthesis technique to large sets of examples, but scalability remains an open problem.
In this paper we present a technique to select, from a large dataset of examples, a representative subset that is sufficient to synthesize a correct program yet small enough to solve efficiently. There are two key ingredients to a synthesis problem: a domain specific language (DSL for short) and a specification. The DSL defines a space of candidate programs which serves as the model class. The specification is commonly expressed as a set of example input-output pairs which the candidate program needs to fit exactly. The DSL restricts the structure of the programs in such a way that it is difficult to fit the input-output examples in an ad hoc fashion: this structure aids generalization to an unseen input despite "over-fitting" the input-output examples during training. Given the precise and combinatorial nature of a synthesis problem, gradient-descent based approaches perform poorly and an explicit search over the solution space is required (Gaunt et al. (2016)). For this reason, synthesis is commonly cast as a constraint satisfaction problem (CSP) (Solar-Lezama (2013); Jha et al. (2010)). In such a setting, the DSL and its execution can be thought of as a parametrized function \( F \), which is encoded as a logical formula. Its free variables \( s \in S \) correspond to different parametrizations within the DSL, and the input-output examples \( D \) are expressed as constraints which the instantiated program needs to satisfy, namely, producing the correct output on a given input. \[ \exists s \in S. \bigwedge_{(x_i, y_i) \in D} F(x_i; s) = y_i. \] The encoded formula is then given to a constraint solver such as Z3 (de Moura & Bjørner (2008)), which solves the constraint problem, producing a set of valid parameter values for \( s \). These values are then used to instantiate the DSL into a concrete, executable program. A key challenge of framing a synthesis problem as a CSP is that of scalability.
While solvers have powerful built-in heuristics to efficiently prune and search the constrained search space, constructing and maintaining the symbolic formula over a large number of constraints constitutes a significant overhead. For this reason, significant efforts were put into simplifying and rewriting the constraint formula for a compact representation (Singh & Solar-Lezama (2016); Cadar et al. (2008)). Without such optimizations, it is possible for a formula to exceed the computer's memory. If one wishes to apply program synthesis to a sufficiently large dataset, there needs to be a way to limit the number of examples expressed as constraints. The standard procedure to limit the number of examples is counter-example guided inductive synthesis, or CEGIS for short (Solar-Lezama et al. (2006)). CEGIS solves the synthesis problem with two adversarial sub-routines, a synthesizer and a checker. The synthesizer solves the CSP with a subset of examples rather than the whole set, producing a candidate program. The checker takes the candidate program and produces an adversarial counter-example that invalidates the candidate program. This adversarial example is then added to the subset of examples, prompting the synthesizer to improve. CEGIS successfully terminates when the checker fails to produce an adversarial example. By iteratively adding counter-examples to the subset, CEGIS can drastically reduce the size of the constraint constructed by the synthesizer, making it scalable to large domains. The subset of examples is representative in the sense that, once a candidate program is found over this subset, it is also correct over all the examples. However, CEGIS has to repeatedly invoke the constraint solver in the synthesis sub-routine to construct the subset, solving a sequence of challenging CSP problems.
Moreover, due to the phase transition property of SAT formulas (Gent & Walsh (1994)), there may be instances in the sequence of CSP problems where there are enough constraints to make the problem non-trivial, but not enough constraints for the solver to properly prune the search space, causing the performance of CEGIS to become extremely volatile. In this paper, we construct the representative subset in a different way. Rather than using the constraint solver as in CEGIS, we directly learn the relationships between the input-output examples with a neural network. Given a (potentially empty) subset of examples, the neural network computes the probability for other examples not in the subset, and grows the subset with the most "surprising" example (one with the smallest probability). The reasoning is that if an input-output example has a low probability conditioned on the given subset, then it is a highly constraining example that can maximally prune the search space once added. We greedily add examples until all the input-output examples in the dataset have a sufficiently high probability (no longer surprising). The resulting subset of examples is then given to the constraint solver. Experiments show that the trained neural network is capable of representing domain-specific relationships between the examples, and, while lacking the combinatorial precision of a constraint solver, can nonetheless find subsets of representative examples. Experiments also show that our approach constructs the sufficient subset at a much lower computational cost and improves over CEGIS in both solution time and stability. 2 An Example Synthesis Problem To best illustrate the synthesis problem and the salient features of our approach, consider a diagram drawing DSL (Ellis et al. (2017)) that allows a user to draw squares and lines.
The DSL defines a draw function, which maps a \((row, col)\) pixel-coordinate to a boolean value indicating whether the specified pixel coordinate is contained within one of the shapes. By calling the draw function across a canvas, one obtains a rendering of the image where a pixel coordinate is colored white if it is contained in one of the shapes, and black otherwise. Figure 1 shows an example of a draw function and its generated rendering on a 32 by 32 pixel grid. The drawing DSL defines a set of parameters that allow the draw function to express different diagrams, some of which are boxed in Figure 1 (left). The synthesis problem is: given a diagram rendered in pixels, discover the hidden parameter values in the draw function so that it can reproduce the same rendering.¹

¹ Imagine a mostly empty Sudoku puzzle: the first few numbers and the last few numbers are easy to fill in, whereas the intermediate set of numbers is the most challenging.

    def draw(row, col):
        # shape constructor
        shapes = []
        for i in range(3):
            for j in range(3):
                offset_x = 10*i + 5*j + 5
                offset_y = 15*i + 3*j + 5
                s = square(2*offset_x, 2*offset_y)
                li = line(2*offset_x, 2*offset_y, 2*offset_x, 2*offset_y, False, True, True)
                li2 = line(2*offset_x, 2*offset_y, 2*offset_x, 2*offset_y, False, True, True)
                shapes += [s, li, li2]
        # inclusion check
        for s in shapes:
            if inside(s, row, col):
                return True
        return False

Figure 1: Sketch of code to draw an image (left) and the generated image (right). Boxes are drawn around the many adjustable parameters in the code, such as the number of iterations and offsets for the shapes.

The synthesized drawing program is correct when its rendered image matches the target rendering exactly. Let $S_{\text{draw}}$ be the synthesized draw function and $\text{Target}$ be the target rendering: $$\text{correct}(S_{\text{draw}}) := \forall (\text{row}, \text{col}).
\ S_{\text{draw}}(\text{row}, \text{col}) = \text{Target}[\text{row}][\text{col}]$$ Because of the many possible combinations of parameters for the program, this is a difficult combinatorial problem that requires the use of a constraint solver. Each pixel in the target render is encoded as an input-output pair $((\text{row}, \text{col}), \text{bool})$, which can be used to generate a distinct constraint on all of the parameters. For the 32 by 32 pixel image, a total of 1024 distinct constraints are generated, which imposes a significant encoding overhead for the constraint solver. In this paper, we propose an algorithm that outputs a representative subset of input-output examples. This subset is small, which alleviates the expensive encoding overhead, yet remains representative of all the examples so that it is sufficient to constrain the parameters using only the subset. Figure 2 (left) shows the selected subset of examples: white and black pixels indicate chosen examples, grey pixels indicate unchosen ones. As we can see, from a total of 1024 examples, only 15% are selected for the representative subset. The representative subset is then given to the constraint solver, recovering the hidden parameter values in Figure 2 (right). The algorithm constructs the representative subset iteratively. Starting with an empty subset, the algorithm uses a neural network model to compute the probability of all the examples conditioned on the chosen examples in the subset. It then adds to the subset the least probable example, the intuition being that the example with the lowest probability best restricts the space of possible solutions. The process stops when all the examples in the dataset are given a sufficiently high probability. In the context of the drawing DSL, the process stops when the neural network is sufficiently confident in its reconstruction of the target rendering given the chosen subset of pixels, Figure 2 (middle).
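The iterative selection procedure can be sketched as a greedy loop: grow the subset with the least probable ("most surprising") example until every remaining example is predicted with high confidence. In this sketch, our own illustration over a hypothetical toy DSL of linear functions, `prob` is a stand-in for the neural network's conditional probability and is computed exactly by counting over a small parameter space; all names are assumptions, not the paper's.

```python
# Greedy "most surprising example" selection loop. 'prob' stands in for
# the learned conditional probability Pr((x, y) | D'); here it is the
# exact fraction of consistent parameters over a tiny toy space.
from itertools import product

def F(x, s):
    a, b = s
    return a * x + b

PARAMS = list(product(range(-3, 4), repeat=2))   # all (a, b) in [-3, 3]^2

def prob(example, subset):
    """Fraction of parameters consistent with `subset` that also fit `example`."""
    consistent = [s for s in PARAMS if all(F(x, s) == y for x, y in subset)]
    x, y = example
    return sum(F(x, s) == y for s in consistent) / len(consistent)

def select(examples, threshold=0.99):
    subset = []
    while True:
        remaining = [e for e in examples if e not in subset]
        # stop once every remaining example is no longer surprising
        if all(prob(e, subset) >= threshold for e in remaining):
            return subset
        subset.append(min(remaining, key=lambda e: prob(e, subset)))

examples = [(x, 2 * x + 1) for x in range(5)]    # generated by y = 2x + 1
print(select(examples))                          # two examples suffice here
```

In the paper's setting, the exact counting in `prob` is of course unavailable; Section 3 develops first a count-oracle idealization and then the neural approximation.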
The rest of the paper elaborates the specifics of our approach. 3 Examples Reduction The crux of our algorithm is an example selection scheme, which takes in the set of examples and outputs a small subset of representative examples. Let \( D' \subseteq D \) be a subset of examples. Abusing notation, let us define the consistency constraint \( D'(s) := \bigwedge_{(x_i, y_i) \in D'} F(x_i; s) = y_i \), that is to say, the parameter \( s \) is consistent with all examples in \( D' \). We define the smallest sufficient subset as: \[ D^* = \arg\min_{D' \subseteq D} |D'| \quad \text{s.t.} \quad \forall s \in S. \quad D'(s) \Rightarrow D(s). \] \( D^* \) is sufficient in the sense that any parameter \( s \) satisfying the subset \( D^* \) must also satisfy \( D \). Finding the exact minimum-sized \( D^* \) is intractable in practice, thus we focus on finding a sufficient subset that is as close in size to \( D^* \) as possible. 3.1 Examples Reduction with a Count Oracle In this subsection we describe an approximate algorithm with a count oracle \( c \), which counts the number of valid solutions to a subset of examples: \[ c(D') := |\{s \in S \mid D'(s)\}|. \] This algorithm constructs the subset \( D' \) greedily, choosing the example that maximally restricts the solution space.

    D' = {}
    while True do
        (x_i, y_i) ← argmin_{(x_j, y_j)} c(D' ∪ {(x_j, y_j)})   # selection criterion
        if c(D') = c(D' ∪ {(x_i, y_i)}) then
            return: D'
        else
            D' ← D' ∪ {(x_i, y_i)}
        end
    end

Algorithm 1: An example reducing algorithm with a count oracle

Claim 1: Algorithm 1 produces a subset \( D' \) that is sufficient, i.e. \( \forall s. D'(s) \Rightarrow D(s) \). Proof 1: The termination condition for Algorithm 1 occurs when, for any example added to \( D' \), the counts remain unchanged: \( c(D') = c(D' \cup \{(x, y)\}) \), \( \forall (x, y) \in D \).
As \( D'(s) \) is defined as a conjunction of satisfying each example, \( c \) can only be monotonically non-increasing with each additional example, as more solutions become invalidated: \( c(D') \geq c(D' \cup \{(x, y)\}) \). At termination, equality occurs for every additional \( (x, y) \in D \), where no more solutions are invalidated, thus we have the sufficiency condition \( \forall s. D'(s) \Rightarrow D(s) \). Claim 2: Algorithm 1 produces a subset \( D' \) that is \( 1 - \frac{1}{e} \) optimal. Proof Gist: To show this, we need to show the count function \( c(D') \) is both monotonic and sub-modular [Nemhauser et al., 1978]. We have already shown monotonicity in the previous proof; for the sub-modularity proof see the appendix. To use Algorithm 1, one needs to solve the model counting problem [Gomes et al., 2008] for the count function \( c \), which is intractable in practice. We now aim to resolve this issue by adopting an alternative selection criterion. 3.2 Example Selection without the Count Oracle The selection criterion in Algorithm 1 uses the count oracle \( c \), which is impractical to compute. In this subsection, we develop an alternative selection criterion that can be approximated efficiently with a neural network. Let \( D' = \{(x^{(1)}, y^{(1)}) \ldots (x^{(r)}, y^{(r)})\} \) where \( (x^{(j)}, y^{(j)}) \) denotes the \( j^{th} \) input-output example to be added to \( D' \). We define the selection probability: \[ Pr((x, y) \mid D') := Pr(F(x; s) = y \mid D'(s)) \] Note that \( Pr((x, y) \mid D') \) is not a joint distribution on the input-output pair \((x, y)\), but rather the probability for the event where the parameterized function \( F(\cdot; s) \) maps the input \( x \) to \( y \), conditioned on the event where \( F(\cdot; s) \) is consistent with all the input-output examples in \( D' \). We will now show that one can use \( Pr((x, y) \mid D') \) as the selection criterion rather than the count oracle in Algorithm 1.
**Claim:** Under a uniform distribution of parameters \( s \sim unif(S) \), \[ \arg\min_{(x,y)} c(D' \cup \{(x, y)\}) = \arg\min_{(x,y)} Pr((x, y) \mid D') \] **Proof:** See appendix. To use \( \arg\min_{(x,y)} Pr((x, y) \mid D') \) as a selection criterion to grow the subset \( D' \), one needs a corresponding termination condition. It is easy to see that the right termination condition is \( \min_{(x,y)} Pr((x, y) \mid D') = 1 \): when all the input-output examples are completely determined given \( D' \), the subset is sufficient. 3.3 Approximating Selection Probability with a Neural Network We now describe how to model \( Pr((x, y) \mid D') \) with a neural network. For the scope of this work, we assume there exists a uniform sampler \( s \sim unif(S) \) for the possible parameters, and that the space of possible input and output values is finite and enumerable: \( \text{dom}(x) = \hat{x}_1 \ldots \hat{x}_N \), \( \text{dom}(y) = \hat{y}_1 \ldots \hat{y}_M \). We will first describe a count-based approach to approximate \( Pr((x, y) \mid D') \), then describe how to model it with a neural network to achieve generalization properties. For the count-based approximation, we sample a subset of input values \( X' = \{x^{(1)}, \ldots, x^{(r)}\} \), and a particular input value \( x \notin X' \). We sample a parameter \( s \in S \) and evaluate the parameterized function, \( F(\cdot; s) \), on each of the inputs in \( X' \), obtaining output values \( F(x^{(1)}; s) = y^{(1)}, \ldots, F(x^{(r)}; s) = y^{(r)} \). We also evaluate the function on \( x \), obtaining \( F(x; s) = y \).
Let \( \hat{c} \) denote the empirical count. We have, after a sufficient number of samples: \[ Pr((x, y) \mid D') \approx \frac{\hat{c}(F(x^{(1)}; s) = y^{(1)}, \ldots, F(x^{(r)}; s) = y^{(r)}, F(x; s) = y)}{\hat{c}(F(x^{(1)}; s) = y^{(1)}, \ldots, F(x^{(r)}; s) = y^{(r)})} \] The issue with the count-based approach is that we need sufficient samples for any subset of inputs, with a total number of \( 2^N \) subsets where \( N = |\text{dom}(x)| \). Therefore, we approximate \( Pr((x, y) \mid D') \) with a neural network. The neural network is set up similarly to a feed-forward auto-encoder with \( N \) input neurons \( \mathcal{Y}_1 \ldots \mathcal{Y}_N \) and \( N \) output neurons \( \mathcal{Y'}_1 \ldots \mathcal{Y'}_N \). That is to say, we enumerate over the (finite set of) distinct input values \( \hat{x}_1 \ldots \hat{x}_N \), creating a corresponding input and output neuron each time. Each input neuron \( \mathcal{Y}_i \) can take on \( 1 + M \) different values where \( M = |\text{dom}(y)| \), and each output neuron \( \mathcal{Y'}_i \) can take on \( M \) different values, both assuming a one-hot encoding. In this encoding, each input neuron \( \mathcal{Y}_i \) and output neuron \( \mathcal{Y'}_i \) can represent the value of running the function \( F(\cdot; s) \) on the corresponding input value \( \hat{x}_i \), \( F(\hat{x}_i; s) \). The value \( F(\hat{x}_i; s) \in \text{dom}(y) \) is represented as a distinct class in \( 1 \ldots M \). Input neuron \( \mathcal{Y}_i \) can also represent an additional class, \( M + 1 \), representing the unknown value. Note that we do not suggest a specific neural network architecture for the middle layers; one should select whichever architecture is appropriate for the domain at hand.
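The count-based approximation described earlier in this subsection can be sketched with Monte Carlo sampling. The toy linear DSL, the sampler, and all names below are our own illustrative assumptions, not the paper's drawing DSL:

```python
# Monte Carlo estimate of the selection probability Pr((x, y) | D'):
# sample parameters s uniformly, keep the samples consistent with D',
# and measure how often F(x; s) = y among them.
import random

def F(x, s):
    a, b = s
    return a * x + b

def sample_param():
    # uniform sampler over a small toy parameter space S = [-5, 5]^2
    return (random.randint(-5, 5), random.randint(-5, 5))

def estimate_prob(example, subset, n_samples=20000):
    x, y = example
    consistent = matching = 0
    for _ in range(n_samples):
        s = sample_param()
        if all(F(xi, s) == yi for xi, yi in subset):
            consistent += 1
            matching += (F(x, s) == y)
    return matching / consistent if consistent else None

random.seed(0)
subset = [(0, 1)]                    # pins b = 1, leaves a free
p = estimate_prob((1, 3), subset)    # true value: 1/11 (only a = 2 of 11 fits)
print(round(p, 2))
```

The \( 2^N \) blow-up the paper points out is visible here as well: a fresh batch of samples would be needed for every distinct conditioning subset, which is what motivates amortizing the estimate into a single trained network.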
During training time, given a particular sampled parameter \( s \) and a sampled subset of inputs \( X' = \{x^{(1)}, \ldots, x^{(r)}\} \), we set the input and output neuron values as follows: \[ \mathcal{Y}_i = \begin{cases} F(\hat{x}_i; s) & \text{if } \hat{x}_i \in X' \\ M + 1 & \text{otherwise} \end{cases} \quad \mathcal{Y'}_i = F(\hat{x}_i; s) \] That is to say, the training task of the neural network is to predict the output values for all the possible input values in $\text{dom}(x)$ while given only a subset of input-output values $D' = \{(x^{(i)}, F(x^{(i)}; s)) \mid x^{(i)} \in X'\}$; the rest (where $x' \notin X'$) are encoded as unknowns. This is similar to a data completion task in Boltzmann machines (Ackley et al., 1985), with the difference that we directly compute the completion process rather than performing a gradient search for the most probable completion configuration. At use time, given a subset of input-output examples $D' = \{(x^{(1)}, y^{(1)}) \ldots (x^{(r)}, y^{(r)})\}$, we set the input neuron corresponding to each $x^{(i)}$ to the value $y^{(i)}$, and set to the unknown value the neurons whose corresponding input values do not occur in $D'$. The neural network then computes the softmax values for all the $M$ classes in all the output neurons, obtaining $Pr((x, y)\mid D')$ for every possible input-output example simultaneously. 3.4 Tying up the Loose Ends with CEGIS In the previous subsections we described an examples reduction algorithm that builds the sufficient subset one example at a time by greedily selecting the least likely input-output example given the examples in the subset. We also showed how one can approximate the selection probability $Pr((x, y)\mid D')$ by training an auto-encoder-like neural network. The remaining problem lies in the approximate nature of the neural network: it cannot perfectly model the probability $Pr((x, y)\mid D')$, and thus we need to use a different termination condition for our example reduction algorithm.
Rather than terminating the algorithm when $\min_{(x,y)} Pr((x, y)\mid D') = 1$, we adopt a weaker termination condition $\text{mean}_{(x,y)} Pr((x, y)\mid D') \geq \beta$, terminating when the average probability of all the examples is greater than a certain threshold $\beta$. By approximating the selection probability and relaxing the termination condition, one can no longer guarantee that the subset produced by our reduction algorithm is sufficient. That is to say, there may be solutions $s$ which satisfy the subset $D'$ yet fail to satisfy the entire set of examples $D$. We can remedy this problem by leveraging CEGIS, which guarantees a solution $s$ that is correct on all the examples in $D$. Like Algorithm 1, CEGIS also maintains a subset of examples $D'$ and grows it one example at a time. The difference lies in the selection criterion and the termination condition. In CEGIS, two subroutines, synthesize and check, interact in an adversarial manner to select the next example to add to the subset: the routine synthesize uses a constraint solver to produce a candidate parameter $s$ that satisfies the current $D'$; the routine check checks the candidate $s$ against all the examples in $D$, and finds a counter example $(x_{\text{counter}}, y_{\text{counter}})$ that invalidates the candidate $s$. This counter example is added to $D'$, prompting the synthesizer to improve its solution. CEGIS terminates when no counter example can be found. Clearly, when CEGIS terminates, the resulting solution $s$ is correct on all the examples in $D$. By using a constraint solver in the synthesis step, and using the checker that checks against all the examples, CEGIS guarantees pruning of the solution space with each counter-example it adds to the subset.

    D' = {}
    while True do
        s = synthesize(S, D')
        (x_counter, y_counter) = check(s, D)
        if (x_counter, y_counter) == None then
            return: D'
        else
            D' = D' ∪ {(x_counter, y_counter)}
        end
    end

Algorithm 2: CEGIS
The main drawback of CEGIS is that it requires repeated calls to the constraint solver in the synthesis step, and there is no guarantee on how well an additional counter-example prunes the search space other than that it invalidates the current candidate solution s. Our synthesis algorithm combines example selection and CEGIS. First, example selection is run until the mean selection probability reaches a certain threshold β; then the resulting set of selected examples is given to CEGIS as the starting set of counter examples. CEGIS then repeatedly calls the constraint solver for candidate solutions, checking each candidate solution against the entire example set D until a correct solution is found:

    # phase 1: examples selection
    D' = {}
    while mean_{(x, y) ∈ D} Pr((x, y)|D') ≤ β do
        (x, y) ← argmin_{(x', y')} Pr((x', y')|D')   # selection criterion
        D' ← D' ∪ {(x, y)}
    end
    # phase 2: CEGIS
    while True do
        s = synthesize(S, D')
        (x_counter, y_counter) = check(s, D)
        if (x_counter, y_counter) == None then
            return: s
        else
            D' = D' ∪ {(x_counter, y_counter)}
        end
    end

Algorithm 3: Synthesis with example selection

By initializing CEGIS with a set of representative examples, CEGIS will be able to find the correct solution with fewer calls to the constraint solver, saving both overhead time and solving time. 4 Experiments We perform a set of experiments measuring the overall speed and stability of our synthesis algorithm, and the representativeness of the subset of examples produced by the selection process. We evaluate our algorithm against 400 randomly generated images. For the experiment, the drawing function contains parameters that can generate a total of $1.31 \times 10^{23}$ possible programs.
For each randomly generated image, the following synthesis algorithms are run:

- full: all 1024 examples are added to the subset, solved once
- rand: 10% of all examples are added to the subset, solved once
- nn: the subset generated by our selection algorithm, solved once
- cegis: the CEGIS algorithm where the check function returns counter-examples in order
- rcegis: the CEGIS algorithm where the check function returns counter-examples at random
- rand+cegis: initialize CEGIS with a random subset of 10% of the examples
- ours: our synthesis algorithm described in Algorithm 3, initializing CEGIS with the subset produced by the selection algorithm

Figure 4 shows the average time breakdown, the median and variance, and the number of examples selected for the different algorithms. Here, rand and nn are excluded because they are not guaranteed to synthesize a program that can perfectly reproduce the target render. On average, rand synthesizes a program that misses 10.1% of the pixels while nn misses 1.2%. Figure 4: The average time taken for each step of the algorithm (upper left). The spread of total time taken for each algorithm (upper right). The number of examples used in each algorithm (bottom). For the average time plot in Figure 4 (top left), we measure the breakdown across the different kinds of time: grey denotes the overhead time in constructing the constraints, slanted stripes denote the solving time, and vertical stripes denote the time taken by the example selection algorithm. On average, our algorithm finishes the fastest, with cegis a close second. We remark that we achieve a similar solve time as the full algorithm, indicating that the subset returned by our algorithm constrained the solver to a similar degree as constraining on all the examples at once. In comparison, all the other algorithms have significantly longer solving times and shorter building times, indicating that these algorithms tend to under-constrain the synthesis problem, making it more difficult to solve.
The drawback of our approach is the time it takes to produce the representative subset, which is around 6 seconds. This constitutes another form of overhead cost, but compared to the overhead of constructing the constraints, it is justified. For the median and variance plot of over-all time in Figure 4 (top right), we note that cegis has the smallest median time, with our algorithm in second place. However, we remark that our algorithm has a much smaller variance in over-all time, achieving higher stability than cegis. One salient feature of this plot is that although cegis and rcegis differ only in which counter-example is added to the subset (the "first" one versus a random one), this small difference results in a huge difference in over-all time performance. We postulate that cegis is able to leverage its particular ordering for choosing counter-examples, top-left to bottom-right, which tends to produce representative examples in the drawing DSL domain we have considered. By removing this particular ordering of choosing counter-examples, rcegis incurs a significant increase in solving time. For the number of examples plot in Figure 4 (bottom), we measure the average number of examples in the selected subset. For this plot, solid grey measures the size of the initial subset of examples, and striped measures the additional examples chosen by the cegis algorithm. We note that rcegis on average was able to solve the synthesis problem with the fewest examples. However, rcegis also performs the worst in terms of over-all solving time, suggesting that while it is possible to generate a valid solution from a small subset of examples, such a subset is likely not sufficiently constraining, so the solver cannot efficiently prune the search space. By comparison, both cegis and our approach, the top two performing algorithms in over-all time, select many more examples for the representative subset.
We note that although rand+cegis selects roughly 70% as many examples as our approach, the subset it selects is not representative. This is evident in its high over-all time, especially the solving time, indicating that the examples it selects do not constrain the solver in a helpful way. By contrast, the subset selected by our selection algorithm is almost perfect, with only 1.5 additional counter-examples needed from CEGIS to arrive at a correct solution that matches all the pixels. Overall, our algorithm provides a quick and stable solution over existing algorithms, and the subset that it provides is small and representative of the whole set. 5 Related Work In recent years there has been increased interest in program induction. Graves et al. (2014), Reed & De Freitas (2015), and Neelakantan et al. (2015) assume a differentiable programming model and learn the operations of the program end-to-end using gradient descent. In contrast, in our work we assume a non-differentiable programming model, allowing us to use expressive program constructs without having to define their differentiable counterparts. Works such as (Reed & De Freitas, 2015) and (Cai et al., 2017) assume strong supervision in the form of complete execution traces, specifying a sequence of exact instructions to execute, while in our work we only assume labeled input-output pairs to the program, without any trace information. Parisotto et al. (2016) and Balog et al. (2016) learn relationships between the input-output examples and the syntactic structures of the program that generated these examples. When given a set of input-output examples, these approaches use the learned relationships to prune the search space by restricting the syntactic forms of the candidate programs. In these approaches, the learned relationship is across the semantic domain (input-output) and the syntactic domain.
In contrast, in our approach we learn a relationship between the input-output examples, a relationship entirely in the semantic domain. In this sense, these approaches are complementary.

Under review as a conference paper at ICLR 2018

Appendix

**Claim:** Algorithm 1 produces a subset $D'$ that is $1 - \frac{1}{e}$ optimal. **Proof:** To show this, we need to show the count function $c(D')$ is both monotonic and sub-modular [Nemhauser et al., 1978]. We have already shown monotonicity. For sub-modularity, we need to show for subsets $A \subseteq B \subseteq D$: $$A \subseteq B \Rightarrow \forall (x,y) \in D. \ c(A) - c(A \cup \{(x,y)\}) \geq c(B) - c(B \cup \{(x,y)\})$$ To show this, we need to show that the number of parameters $s$ invalidated by $(x,y)$ is at least as great in $A$ as in $B$. Let $A'(s) := A(s) \land \neg\{(x,y)\}(s)$, the constraint stating that a parameter $s$ should satisfy $A$ but fail to satisfy $(x,y)$; similarly, let $B'(s) := B(s) \land \neg\{(x,y)\}(s)$. The count $c(A')$ indicates how many parameters $s$ become invalidated by introducing $(x,y)$ to $A$, i.e. $c(A') = c(A) - c(A \cup \{(x,y)\})$; similarly, $c(B') = c(B) - c(B \cup \{(x,y)\})$. Note that $B'$ is strictly more constrained than $A'$ due to $A \subseteq B$. Thus, there are at least as many solutions to $A'$ as there are to $B'$, i.e. $c(A') \geq c(B')$, showing sub-modularity.
**Claim:** Under a uniform distribution of parameters $s \sim \text{unif}(S)$, $$\arg\min_{(x,y)} c(D' \cup \{(x,y)\}) = \arg\min_{(x,y)} Pr((x,y)|D')$$ **Proof:** The probability $Pr((x,y)|D')$ can be written as a summation over all the possible parameter values for $s$: $$Pr((x,y)|D') := Pr(F(x; s) = y | D'(s)) = \sum_{s \in S} Pr(s | D'(s)) Pr(F(x; s) = y | s).$$ Note that under $s \sim \text{unif}(S)$, we have: $$Pr(s | D'(s)) = \begin{cases} \frac{1}{c(D')} & \text{if } D'(s) \\ 0 & \text{otherwise} \end{cases}.$$ And since $F(\cdot; s)$ is a function we have: $$Pr(F(x; s) = y | s) = \begin{cases} 1 & \text{if } F(x; s) = y \\ 0 & \text{otherwise} \end{cases}.$$ Thus the summation over all $s$ results in: $$\sum_{s \in S} Pr(s | D'(s)) Pr(F(x; s) = y | s) = \frac{c(D' \cup \{(x,y)\})}{c(D')}.$$ As $c(D')$ is a constant given $D'$ and is invariant under $\arg\min_{(x,y)}$, we have $\arg\min_{(x,y)} c(D' \cup \{(x,y)\}) = \arg\min_{(x,y)} Pr((x,y)|D')$ as claimed. **Drawing Program Examples** The following images are some examples of what can be synthesized with the drawing program. Each row has the target image on the left. Next to the target image are the observations chosen by the neural network, followed by the neural network's estimation of the image. The recovered parameters are shown on the right. **Sequence of Prediction Estimates** The following are a sequence of the neural network's approximations of the render given its current observations. The sampling of observations is shown on top and the corresponding neural network approximation is shown underneath it.
A Functional Geometric Approach to Distributed Support Vector Machine (SVM) Classification Author: Sofia KAMPIOTI Supervisor: As. Prof. Vasilios SAMOLADAS Abstract We live in the information age, and with every passing year, our environment becomes more and more heavily defined by data, leading to a major need for better decision-making models. The breakthroughs in data analytics have already seen through machine learning. Support vector machines (SVM) are a popular, adaptive, multipurpose machine learning algorithm with the ability to capture complex relationships between data points without having to perform difficult transformations. We study the problem of prohibitive communication costs that a centralized architecture implies if most of the data is generated or received on different remote machines. The past few years notable efforts have been made to achieve parallelism on the training procedure of machine learning models. We propose the use of Functional Geometric Monitoring (FGM) communication protocol which is used to monitor high-volume, rapid distributed streams to decrease the communication cost on a distributed SVM architecture. Our main goal is both to achieve centralized-like prediction loss and to minimize communication costs. In our proposal, the sklearn library, for centralized machine learning, is used in a distributed manner resulting in a notable speedup for the training procedure. Acknowledgements "Foremost, I would like to express my sincere gratitude to my supervisor Prof. Vasilis Samoladas for the continuous support of my Diploma thesis study and research, for his patience, motivation, enthusiasm, and immense knowledge. Besides my advisor, I would like to thank the rest of my thesis committee: Prof. Minos Garofalakis and Antonios Deligiannakis for their time. I would also like to thanks my fellow students, Ilias Balampanis, Edward Epure, Eftichia Seisaki, for there wonderful collaboration and support during my thesis process. 
Finally, I would like to thank my family and friends, since this project would have been impossible without them.

Contents

1 Introduction
  1.1 Related Work
  1.2 Contribution
  1.3 Outline
2 Theoretical Background
  2.1 Machine Learning Basics
    2.1.1 Types of learning algorithms
    2.1.2 Classification
  2.2 Support Vector Machines (SVM)
    2.2.1 Maximum Margin
    2.2.2 Stochastic Gradient Descent
  2.3 Functional Geometric Monitoring (FGM)
  2.4 Tools
    2.4.1 Scikit-Learn
    2.4.2 Dask
    2.4.3 Scikit-Learn and Dask Connection
3 Implementation
  3.1 Decentralized architecture
  3.2 Safe Function
  3.3 Basic FGM Protocol for Learning
  3.4 SVM-FGM protocol
4 Experimental Results
  4.1 Datasets
  4.2 Results
5 Conclusions

1 Introduction

In an age of ever-increasing information collection and the need to evaluate it, building systems that utilize the yet untapped and available resources is driving the development of more sophisticated distributed computing systems. Driven by this urgent need, and by the fact that the demand for processing training data has outpaced the increase in computational power of computing machinery, distributing the machine learning workload across multiple machines has gained a lot of scientific interest.
Unfortunately, communication cost plays a major role in distributed systems performance: communication is often the bottleneck of applications, and it directly relates to energy consumption, network bandwidth usage, and overall running time. Support Vector Machines (SVM) have a strong theoretical foundation and a wide variety of applications. On the other hand, the underlying optimization problems can be highly demanding in terms of run time and memory consumption. Distributed scenarios emerge when data are captured in many places and their transport to and storage at a unique location is undesirable. A rather straightforward procedure for achieving distribution in SVMs is a sort of distributed chunking technique, where the results of the training procedure are exchanged with the other nodes. However, the amount of information that needs to be transmitted might rapidly make this approach unfeasible in real-world conditions. For this reason, this work focuses on reducing the communication cost and run time by implementing Functional Geometric Monitoring (FGM), a protocol that provides substantial benefits in terms of performance and scalability in monitoring problems [12]. This work aims to efficiently test distributed SVM in an online manner, with reduced communication, in a real-time system. We managed to use the sklearn library, a library for centralized machine learning, for distributed online training, and achieved rather encouraging results in terms of speedup and centralized-like accuracy.

1.1 Related Work

In the past few years, distributed training of SVM models has gained a lot of interest. Driven by this concept, many different approaches came up, each focusing on a different way to accomplish distribution.
In particular, [10] proposed a distributed technique for training SVMs in sensor networks, [19] propose a communication-avoiding SVM (CA-SVM) for shared-memory architectures by combining several approaches like the cascade SVM and DC-SVM, while [11] casts the SVM problem as a set of coupled decentralized convex optimization subproblems with consensus constraints imposed on the desired classifier parameters. The contribution of this work, though, focuses on reducing the communication cost of a distributed online SVM training process.

1.2 Contribution

This study aims to utilize the advancements in the field of distributed stream monitoring for the problem of distributed machine learning classification and, more precisely, Support Vector Machines. The main goal is to practically implement a functional geometric approach to this machine learning algorithm in a real distributed environment using Python Dask. This work efficiently combines sklearn and Dask to integrate the SVM algorithm in a distributed online manner. Note that sklearn is a library for centralized machine learning, but this work shows that sklearn can also be used in a distributed system and perform equivalently.

1.3 Outline

The rest of this section describes related work and the contribution of this work to the distributed SVM concept. Section 2 describes the theoretical basis on which the implementation rests. The main theoretical concepts are machine learning basics focusing on classification, Support Vector Machines (SVM), and the Functional Geometric Monitoring (FGM) protocol. Section 3 focuses on the implementation, mainly how distributed SVM is combined with the FGM protocol to achieve speedup and a reduction of the communication cost. Later on, Section 4 presents the results of this combination and how it improves communication and achieves speedup relative to a centralized structure.
2 Theoretical Background

This section introduces the main theoretical concepts and research on which this work is based. Support Vector Machines (SVM) [3] and Functional Geometric Monitoring [12] are, as previously mentioned, the main subjects of this project. So, this section describes the theory of both algorithms in a way that will help in understanding the connection between the research and the implementation of this work. Note that the theoretical research is not constrained to the related work described in 1.1 but is also based on the books [1], [9], [14] and on Wikipedia [7].

2.1 Machine Learning Basics

Machine learning algorithms are responsible for the vast majority of artificial intelligence advancements and applications. Machine learning is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future, based on the examples that we provide. The discipline of machine learning employs various approaches to help computers learn to accomplish tasks for which no fully satisfactory algorithm is available. Machine learning approaches are divided into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system.

2.1.1 Types of learning algorithms

The types of machine learning algorithms differ in their approach, the type of data they input and output, and the type of task or problem that they are intended to solve. The following figure indicates the hierarchy of learning types. Supervised Learning is the task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples.
In supervised learning, each example is a pair consisting of an input object and a desired output value. This is usually written as a set of data \((x_i, t_i)\), where the inputs are \(x_i\), the targets are \(t_i\), and \(i\) runs from 1 to the number of training examples (see notation). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow the algorithm to correctly determine the class labels for unseen instances. Types of supervised learning algorithms include classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may take any numerical value within a range. Classification and regression are described in more detail later in this section. **Unsupervised Learning** is a conceptually different problem from supervised learning. It is a type of machine learning that looks for previously undetected patterns in a data set with no pre-existing labels and with a minimum of human supervision. In contrast to supervised learning, which usually makes use of human-labeled data, unsupervised learning tries to identify similarities between the inputs so that inputs that have something in common are categorized together. The statistical approach to unsupervised learning is known as density estimation. Within unsupervised machine learning there are several different approaches, such as clustering and dimension reduction. Clustering is the assignment of objects to homogeneous groups while making sure that objects in different groups are not similar, while dimension reduction reduces the number of features under consideration, where each feature is a dimension that partly represents the objects. **Reinforcement Learning** fills the gap between supervised learning and unsupervised learning.
Reinforcement learning is usually described in terms of the interaction between some agent and its environment. The agent is the learner, and the environment is where it is learning and what it is learning about. Reinforcement learning maps states or situations to actions so as to maximize a reward function. In particular, the algorithm knows the state (the current input) and the available actions, and it aims to choose the action that maximizes the reward. The reward is given as feedback to the algorithm to guide future actions, considering that the methods that seem to work should be tried over and over again until they are perfected or better solutions are found, and those that do not work must be discarded.

2.1.2 Classification

Fundamentally, classification is about predicting a label, and regression is about predicting a quantity. Regression algorithms try to find the best-fit line, which can predict the output more accurately in cases like weather prediction, house price prediction, and similar problems with continuous or real-valued outputs. On the other hand, classification algorithms try to find the decision boundary, which can divide the dataset into different classes, as in identification of spam emails, speech recognition, identification of cancer cells, and any machine learning problem with a discrete value as an output. As initially mentioned, this work is dedicated to discrete data and especially to problems of a binary nature, such as identification of spam emails, hence regression algorithms will not form a part of it. Consequently, there follows a more thorough description of classification algorithms in order to establish the main concept and some basic notation required for the following sections. A classification problem consists of taking input vectors and deciding which of \( K \) classes they belong to, based on training from exemplars of each class.
The most important point about the classification problem is that it is discrete: each example belongs to precisely one class, and the set of classes covers the whole possible output space. If only two classes are involved the classification is called \textit{binary}; otherwise, it is \textit{multiclass classification}. Since classification problems refer to different data sets in each individual problem, there is a variety of classification algorithms, such as support vector machines, linear classifiers, quadratic classifiers, and more. Nevertheless, each and every one shares the same fundamental concept that classification requires, which is hereinafter explained. There is a basic \textit{model} that distinguishes classification algorithms from other supervised learning algorithms. Firstly, the goal in classification is to take an input vector \( x \) and to assign it to one of \( K \) discrete classes \( C_k \) where \( k = 1, \ldots, K \). The classes are taken to be disjoint, at least in the most common scenario, so that each input is assigned to one and only one class. The input space is thereby divided into decision regions whose boundaries are called \textit{decision boundaries} or \textit{decision surfaces}. In this work, we consider linear models for classification, by which we mean that the decision surfaces are linear functions of the input vector \( x \) and hence are defined by \((D - 1)\)-dimensional hyperplanes within the \( D \)-dimensional input space. Note that from now on, any data set referred to is linearly separable, meaning the classes can be separated exactly by linear decision surfaces. In the case of two-class problems, a binary representation is used in which there is a single target variable \( t \in \{0, 1\} \) such that \( t = 1 \) represents class \( C_1 \) and \( t = 0 \) represents class \( C_2 \).
The value of \( t \) can be interpreted as the probability that the class is \( C_1 \), with the probability taking only the extreme values of 0 and 1. In the simplest case, where the model is linear in the input variables, an appropriate function \( y(x) \) is constructed whose values for new inputs \( x \) constitute the predictions for the corresponding values of \( t \), and it therefore takes the form:

\[ y(x) = w^T x + w_0, \text{ where } y \in \mathbb{R} \]

Here \( w \) is called a weight vector, and \( w_0 \) (or sometimes \( b \)) is a bias. Practically, an input vector \( x \) is assigned to class \( C_1 \) if \( y(x) \geq 0 \) and to class \( C_2 \) otherwise. The corresponding decision boundary is therefore defined by the relation \( y(x) = 0 \). So, if \( x \) is a point on the decision surface, then \( y(x) = 0 \), and the normal distance from the origin to the decision surface is given by:

$$\frac{w^T x}{||w||} = -\frac{w_0}{||w||}$$

Furthermore, note that the value of \( y(x) \) gives a signed measure of the perpendicular distance \( r \) of the point \( x \) from the decision surface, as shown in the following figure.

Figure 5: Illustration of the geometry of a linear discriminant function in two dimensions

Finally, such models have useful analytical and computational properties, but their practical applicability is limited by the curse of dimensionality. So, in order to apply such models to large-scale problems, it is necessary to adapt the basis functions to the data. Support vector machines (SVMs), discussed in the next section, address this by first defining basis functions that are centred on the training data points and then selecting a subset of these during training.

### 2.2 Support Vector Machines (SVM)

The support vector machine (SVM) is the most popular classifier based on a linear discriminant function and hence it is a linear classifier.
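As a small illustration of the decision rule and the signed distance just defined, the following sketch evaluates \( y(x) = w^T x + w_0 \) for a toy two-dimensional case; the weight vector and bias below are arbitrary assumptions chosen for the example.

```python
import numpy as np

def predict_class(w, w0, x):
    """Evaluate the linear discriminant y(x) = w^T x + w0 and
    assign x to class C1 if y(x) >= 0, else to class C2."""
    y = np.dot(w, x) + w0
    return "C1" if y >= 0 else "C2"

def signed_distance(w, w0, x):
    """Signed perpendicular distance r = y(x) / ||w|| of the point x
    from the decision surface y(x) = 0."""
    return (np.dot(w, x) + w0) / np.linalg.norm(w)

w = np.array([1.0, 1.0])   # hypothetical weight vector
w0 = -1.0                  # hypothetical bias

print(predict_class(w, w0, np.array([2.0, 2.0])))    # a point with y(x) > 0
print(signed_distance(w, w0, np.array([0.0, 0.0])))  # distance of the origin
```

Note that the distance of the origin comes out negative here, matching the formula \( -w_0/||w|| \) only up to sign conventions: with \( w_0 = -1 \), the origin lies on the \( C_2 \) side of the boundary.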
The main property that distinguishes SVM from other classification algorithms is that the determination of the model parameters corresponds to a convex optimization problem, so any local solution is also a global optimum. SVMs are so-called "non-parametric" models, meaning their "learning" (selection, identification, estimation, training) is a crucial issue, since the parameters are not predefined and their number depends on the training data used. Generally, in SVM a data point is viewed as a list of $p$ numbers, and the goal is to separate such points with a $(p - 1)$-dimensional hyperplane. Since various hyperplanes can linearly separate the given points, the final hyperplane is determined as the one that represents the largest separation between the classes. The rest of the section will provide some detail on the support vector machine, including the concept of maximum margin and Stochastic Gradient Descent.

2.2.1 Maximum Margin

As previously mentioned, binary classifiers construct a linear model of the form:

$$y(x) = w^T x + b \tag{2.1}$$

where $b$ is the bias (previously denoted $w_0$). The training data set comprises $N$ input vectors $x_1, \ldots, x_N$, with corresponding target values $t_1, \ldots, t_N$ where $t_n \in \{-1, 1\}$, and new data points $x$ are classified according to the sign of $y(x)$. Given the assumption that the training data set is linearly separable in feature space, there exists at least one pair of parameters $w$ and $b$ such that a function of the form (2.1) satisfies $y(x_n) > 0$ for points having $t_n = +1$ and $y(x_n) < 0$ for points having $t_n = -1$, so that $t_n y(x_n) > 0$ for all training data points. The equality $y(x) = 0$ defines the decision boundaries, also known as hyperplanes, that classify the data points. Practically, there may exist many such solutions that separate the classes exactly, so there is a need to find the one that will give the smallest generalization error.
The support vector machine approaches this problem through the concept of the margin, which is defined to be the smallest distance between the decision boundary and any of the samples, as illustrated in Figure 6. The decision boundary is then chosen to be the one for which the margin is maximized.

Figure 6: Illustration of margin and support vectors

The margin is defined as the perpendicular distance between the decision boundary and the closest of the data points, where the perpendicular distance of a point $x$ from a hyperplane is given by $\frac{|y(x)|}{\|w\|}$. Furthermore, the main interest is focused on solutions for which all data points are correctly classified, so that $t_n y(x_n) > 0$ for all $n$. In view of the above observation, the distance of the point $x_n$ to the decision surface is given by:

$$\frac{t_n y(x_n)}{\|w\|} = \frac{t_n (w^T \phi(x_n) + b)}{\|w\|} \tag{2.2}$$

As mentioned before, the margin is given by the perpendicular distance to the closest point $x_n$ from the data set, and we need to maximize it. Thus, the maximum margin is given by the optimization of the parameters $w$ and $b$, as follows:

$$\arg\max_{w,b} \left\{ \frac{1}{\|w\|} \min_n [t_n (w^T \phi(x_n) + b)] \right\} \tag{2.3}$$

A direct solution to this optimization problem would be very complex, so instead of solving the direct problem itself, a solution is to solve a converted, equivalent problem. Since rescaling $w$ and $b$ leaves the distance $t_n y(x_n)/\|w\|$ from any point $x_n$ to the decision surface unchanged, we may set this distance to be exactly one for the closest point, resulting in the canonical representation of the decision hyperplane:

$$t_n (w^T \phi(x_n) + b) \geq 1, \quad n = 1, \ldots, N \tag{2.4}$$

The optimization problem then simply requires that we maximize \( ||w||^{-1} \), which is equivalent to minimizing \( ||w||^2 \), and so we have to solve the optimization problem

\[ \arg\min_{w,b} \frac{1}{2}||w||^2 \tag{2.5} \]

subject to the constraints given by (2.4).
One way to solve quadratic optimization problems is by introducing Lagrange multipliers \( a_n \geq 0 \), with one multiplier \( a_n \) for each of the constraints in (2.4). Hence, this constrained optimization problem converts into the Lagrangian function:

\[ L(w, b, a) = \frac{1}{2}||w||^2 - \sum_{n=1}^{N} a_n\{t_n(w^T\phi(x_n) + b) - 1\} \tag{2.6} \]

where \( a = (a_1, \ldots, a_N)^T \). Setting the derivatives of \( L(w, b, a) \) with respect to \( w \) and \( b \) equal to zero, we obtain the following two conditions:

\[ w = \sum_{n=1}^{N} a_n t_n \phi(x_n) \tag{2.7} \]

\[ 0 = \sum_{n=1}^{N} a_n t_n \tag{2.8} \]

Eliminating \( w \) and \( b \) from \( L(w, b, a) \) using these conditions then gives the dual representation of the maximum margin problem, in which we maximize:

\[ \tilde{L}(a) = \sum_{n=1}^{N} a_n - \frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{N} a_n a_m t_n t_m k(x_n, x_m) \tag{2.9} \]

subject to the constraints:

\[ a_n \geq 0, \quad n = 1, \ldots, N \tag{2.10} \]

\[ \sum_{n=1}^{N} a_n t_n = 0 \tag{2.11} \]

Finally, \( y(x) \) defined by (2.1), whose sign classifies new points, can be expressed in terms of the parameters \( \{a_n\} \) as:

\[ y(x) = \sum_{n=1}^{N} a_n t_n k(x, x_n) + b \tag{2.12} \]

A constrained optimization of this form satisfies the Karush-Kuhn-Tucker (KKT) conditions, which in this case require that the following three properties hold:

\[ a_n \geq 0 \tag{2.13} \]
\[ t_n y(x_n) - 1 \geq 0 \tag{2.14} \]
\[ a_n \{t_n y(x_n) - 1\} = 0 \tag{2.15} \]

Hence, any data point for which \( a_n = 0 \) will play no role in making predictions, since it does not appear in the sum in (2.12). The remaining points have \( a_n > 0 \) and, by condition (2.15), must satisfy \( t_n y(x_n) = 1 \), so they correspond to the points that lie on the maximum margin hyperplanes illustrated in Figure 6.
These points are called support vectors, and they play a major role in making predictions, since once the model is trained a significant proportion of the data points can be discarded and only the support vectors retained. So far, the above observations were based on the strong assumption that the training data points are linearly separable. In practice, however, the class distributions may overlap, in which case exact separation of the training data is not possible or leads to poor generalization. In order to deal with this kind of training data, the support vector machine should be modified in a way that allows misclassification. Misclassification means that data points are allowed to be on the 'wrong side' of the margin boundary, but with a penalty that increases with the distance from that boundary. For the subsequent optimization problem, it is convenient to make this penalty a linear function of this distance. The solution is given by slack variables \( \xi_n \geq 0 \), \( n = 1, \ldots, N \), with one slack variable for each training data point. Data points that are on or inside the correct margin boundary have \( \xi_n = 0 \), and \( \xi_n = |t_n - y(x_n)| \) for the remaining points. So, if \( \xi_n > 1 \) the point is misclassified, while if \( \xi_n = 1 \) it lies on the decision boundary. Taking the above into consideration, the classification constraints (2.4) are replaced with:

\[ t_n y(x_n) \geq 1 - \xi_n, \quad n = 1, \ldots, N \tag{2.16} \]

in which the slack variables are constrained to satisfy \( \xi_n \geq 0 \). Hence, it is now clear that \( \xi_n = 0 \) means the point is correctly classified, \( 0 < \xi_n \leq 1 \) that the point lies inside the margin but on the correct side of the decision boundary, and finally those with \( \xi_n > 1 \) lie on the wrong side of the decision boundary, as illustrated in Figure 7.

Figure 7: Illustration of the slack variables \( \xi_n \geq 0 \).
Data points with circles around them are support vectors. This is sometimes described as relaxing the hard margin constraint to give a soft margin. Now, the minimization problem (2.5) should take into consideration the penalty for the points that lie on the wrong side of the margin boundary. Consequently, we minimize:

$$ C \sum_{n=1}^{N} \xi_n + \frac{1}{2} ||w||^2 \tag{2.17} $$

where the parameter $C > 0$ controls the trade-off between the slack variable penalty and the margin. The bigger $C$ gets, the harder the margin is, so $C \to \infty$ recovers the earlier support vector machine for separable data. Finally, the minimization of (2.17) is subject to the constraints (2.16) and also $\xi_n \geq 0$. Correspondingly to (2.6), the Lagrangian function takes the following form:

$$ L(w, b, a) = \frac{1}{2} ||w||^2 + C \sum_{n=1}^{N} \xi_n - \sum_{n=1}^{N} a_n \{t_n y(x_n) - 1 + \xi_n\} - \sum_{n=1}^{N} \mu_n \xi_n \tag{2.18} $$

where $a_n \geq 0$ and $\mu_n \geq 0$ are Lagrange multipliers. Following the same strategy as earlier, the dual Lagrangian takes the form:

$$ \tilde{L}(a) = \sum_{n=1}^{N} a_n - \frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{N} a_n a_m t_n t_m k(x_n, x_m) \tag{2.19} $$

which is identical to the separable case, except that the constraints are somewhat different. These constraints arise from a combination of the constraints on the Lagrange multipliers and the parameter $C$. Therefore, the maximization of (2.19) is subject to:

$$ 0 \leq a_n \leq C \tag{2.20} $$

$$ \sum_{n=1}^{N} a_n t_n = 0 \tag{2.21} $$

for $n = 1, \ldots, N$, where (2.20) are known as box constraints. This again represents a quadratic programming problem. As before, a subset of the data points may have $a_n = 0$, in which case they do not contribute to the predictive model (2.12). The remaining data points constitute the support vectors.
These have $a_n > 0$ and hence:

$$ t_n y(x_n) = 1 - \xi_n \tag{2.22} $$

A rather common problem that support vector machines face is dealing with a non-linear data set, where the data cannot be separated by a straight line and, unlike the above case, even relaxing the margins still leaves the generalization error high.

Figure 8: Example of non-linear data set

The basic idea of kernels is that when a data set is inseparable in the current dimensions, another dimension is added by mapping the current feature space to one in which the data could be separable. So, for all \( x_n \) and \( x_m \) in the input space \( X \), certain functions \( k(x_n, x_m) \) can be expressed as an inner product in another space \( V \). The kernel can be written in terms of a "feature map" \( \phi : X \rightarrow V \) which satisfies

\[ k(x_n, x_m) = \langle \phi(x_n), \phi(x_m) \rangle_V \]

Following the above strategy with the changed feature space, the dual Lagrangian takes the form:

\[ \tilde{L}(a) = \sum_{n=1}^{N} a_n - \frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{N} a_n a_m t_n t_m k(x_n, x_m) \]

Concluding, there is a lot of theoretical background behind kernel functions, which will not be discussed further since it is not the subject of this work.

2.2.2 Stochastic Gradient Descent

Machine learning models typically have parameters, which for the simple SVM case are \( w \) and \( b \), and a cost function to evaluate how good a particular set of parameters is. Furthermore, the SVM constraints are linear in the unknowns, and any linear constraint defines a convex set. A set of simultaneous linear constraints defines the intersection of convex sets, so the SVM constraints define a convex set. The main property of a convex function is that a locally optimal point is also globally optimal.
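The feature-map identity \( k(x_n, x_m) = \langle \phi(x_n), \phi(x_m) \rangle_V \) from the kernel discussion above can be checked numerically for a simple case. The sketch below uses the homogeneous polynomial kernel \( k(a, b) = (a \cdot b)^2 \) in two dimensions, a standard textbook example chosen here purely for illustration, whose explicit feature map is \( \phi([a_1, a_2]) = [a_1^2, \sqrt{2}\,a_1 a_2, a_2^2] \).

```python
import numpy as np

def phi(x):
    # Explicit feature map of the 2-D homogeneous polynomial kernel of degree 2
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def k(a, b):
    # The same kernel evaluated directly in input space, without mapping
    return np.dot(a, b) ** 2

xn, xm = np.array([1.0, 2.0]), np.array([3.0, 0.5])
lhs = k(xn, xm)                     # kernel in input space
rhs = np.dot(phi(xn), phi(xm))      # inner product in feature space
print(lhs, rhs)                     # the two values coincide
```

This is exactly what makes the dual formulation (2.19) attractive: the feature space \( V \) never has to be constructed explicitly, since only kernel values \( k(x_n, x_m) \) appear.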
Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent, as defined by the negative of the gradient. Hence, gradient descent can be used to update the parameters of our model. Let us define \( W \) as the model parameters; then the steps to minimize a cost function \( J(W) \) are:

1. Initialize the weights \( W \) randomly
2. Calculate the gradient \( G \) of the cost function with respect to the parameters. The value of the gradient \( G \) depends on the inputs, the current values of the model parameters, and the cost function.
3. **Update the weights** by an amount proportional to \( G \): \( W = W - \eta G \), where \( \eta \) is the learning rate, which determines the size of the steps we take to reach a minimum.
4. Repeat until the cost \( J(W) \) stops decreasing.

Earlier in this section, it was made clear that in SVM the main goal is minimizing \( ||w||^2 \). An alternative way to represent the minimization problem is through a simple convex loss function, the hinge loss:

\[ \ell_{hinge}(t, \hat{t}) = \max(0, 1 - \hat{t}t) \tag{2.25} \]

so the minimization problem takes the form:

\[ \min_{w,b} ||w||^2 + C \sum_n \ell_{hinge}(t_n, w \cdot x_n + b) \tag{2.26} \]

Therefore, this form of the minimization problem can be solved by gradient descent, and the minimization function takes the following form:

\[ J(W) = \frac{1}{2} w^T w + C \sum_n \max(0, 1 - t_n w^T x_n) \tag{2.27} \]

Furthermore, in real-time applications gradient descent can be applied in the scenario of having to take online decisions. Online SVM training means that the classifier changes over time, and the distribution is no longer fixed. Since we need to calculate the gradients over the whole dataset to perform one parameter update, gradient descent can be very slow.
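The four steps above, applied to the cost (2.27), can be sketched as follows. This is a minimal full-batch (sub)gradient version on a hypothetical toy dataset; the bias \( b \) is omitted for brevity, and the hyperparameter values are arbitrary assumptions.

```python
import numpy as np

def batch_gradient_descent(X, t, C=1.0, eta=0.01, n_iters=500):
    """Minimize J(w) = 0.5 * w^T w + C * sum_n max(0, 1 - t_n w^T x_n)
    by full-batch (sub)gradient descent, following steps 1-4 above."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])               # step 1: random init
    for _ in range(n_iters):
        margins = t * (X @ w)
        active = margins < 1                       # points violating the margin
        # step 2: subgradient of J(w); only margin violators contribute
        G = w - C * (t[active, None] * X[active]).sum(axis=0)
        w = w - eta * G                            # step 3: update
    return w                                       # step 4: fixed iteration budget

# Toy linearly separable data (hypothetical):
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-1.0, -2.5]])
t = np.array([1.0, 1.0, -1.0, -1.0])
w = batch_gradient_descent(X, t)
print(np.sign(X @ w))  # should recover the training labels
```

Note that because the hinge loss is not differentiable at the kink, the gradient \( G \) here is a subgradient; for a convex objective such as (2.27), this is sufficient for convergence with a small enough learning rate.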
On the other hand, **Stochastic Gradient Descent** (SGD) computes the gradient for each update using a single training data point \( x_n \) (chosen at random) or a mini-batch of the training set. The main idea is that the gradient calculated this way is a stochastic approximation to the gradient calculated using the entire training data. Each update is much faster than in Gradient Descent and, over many updates, the same general direction is followed. So, even though a higher number of iterations is required to reach the global minimum, it is still computationally preferable to Gradient Descent. The main difference in the minimization function is that it now corresponds to a single training data point or to a certain mini-batch: instead of computing \( J(W) \) we compute \( J_i(W) \), for \( i = 1, \ldots, K \), where \( K \) is the number of individual points or the number of mini-batches. Finally, since the goal of this work is to train the classifier in an online fashion, Stochastic Gradient Descent was used for the training stage.

2.3 Functional Geometric Monitoring (FGM)

Functional Geometric Monitoring is a method for distributed stream monitoring, applicable wherever Geometric Monitoring is, and it provides substantial benefits in terms of performance, scalability, and robustness. Its strict separation of concerns between distributed-systems issues and the monitoring problem is critically important to anyone wishing to implement distributed monitoring on a general-purpose middleware platform.

**Geometric Monitoring (GM)** With Geometric Monitoring (GM), an arbitrary global monitoring task can be split into a set of constraints applied locally on each of the streams. The constraints are used to locally filter out data increments that do not affect the monitoring outcome, thus avoiding unnecessary communication. As a result, it enables monitoring of arbitrary threshold functions over distributed data streams in an efficient manner.
Practically, as data arrives on the streams, each node verifies that the constraint on its stream has not been violated. The geometric analysis of the problem guarantees that as long as the constraints on all the streams are upheld, the result of the query remains unchanged, and thus no communication is required. If a constraint on one of the streams is violated, new data is gathered from the streams, the query is reevaluated, and new constraints are set on the streams.

**Functional Geometric Monitoring (FGM)** Functional Geometric Monitoring is based on the core ideas of Geometric Monitoring (GM), but instead of a binary constraint, each site is provided with a complex, non-linear function which, applied to its local summary vector, projects it to a real number. The focus of this subsection is to present the basic principles and protocol of FGM. Practically, let us assume that there are \( k \) distributed sites and that at each site a local stream is generated or collected. The sites collectively monitor the sum of these one-dimensional projections, and as long as the global sum is non-positive, the monitoring bounds are guaranteed. Let \( S_i(t), i = 1, \ldots, k \) denote the local state vectors. Every site communicates with a coordinator, where users pose queries on the global stream. The coordinator maintains, for each site \( i \), an estimated state vector \( E_i \). When a flush occurs, the site transmits its drift vector \( X_i(t) = S_i(t) - E_i \), and the coordinator updates \( E_i \) by adding \( X_i \) to it, while the site resets \( X_i \) to 0. Then, the coordinator updates the global estimate \( E = \frac{1}{k} \sum_{i=1}^{k} E_i \). In geometric monitoring, the correctness criterion is described as a geometric constraint of the form \( S \in \mathbb{A} \), where \( \mathbb{A} \subseteq \mathbb{R}^D \) is the admissible region, that is, the set of global stream states where the constraint holds.
The correctness criterion here differs and is based on the concept of the \( \psi \) function. The system is in a safe state as long as \( E + \frac{1}{k} \sum_{i=1}^{k} X_i = S \in \mathbb{A} \). To guarantee safety, FGM employs a real function \( \phi : \mathbb{R}^D \rightarrow \mathbb{R} \) depending on \( \mathbb{A}, E \) and \( k \); such a function is called \( (\mathbb{A}, E, k) \)-safe. Each site tracks its \( \phi \)-value, \( \phi(X_i) \), as \( X_i \) is updated. The sum of these values, \( \psi = \sum_{i=1}^{k} \phi(X_i) \), is constructed so that \( \psi \leq 0 \) implies \( S \in \mathbb{A} \); safety is therefore maintained as long as \( \psi \leq 0 \). The FGM algorithm works in rounds to monitor the threshold condition

\[ \sum_{i=1}^{k} \phi(X_i) \leq 0 \tag{2.28} \]

and guarantee the desired safety. Generally, the sites perform local updates and, when necessary, ship the \( \phi \)-value of their local drift to the coordinator, which is responsible for monitoring the threshold condition. The coordinator collects this information from the workers to compute \( \psi \), and if safety is violated (\( \psi > 0 \)), it requests the actual drift vectors, recomputes the global estimate and restores the system's safety. In this way the system stays up to date with minimal communication cost. Later on we describe the execution of the FGM algorithm specifically for distributed, online Support Vector Machine training.

### 2.4 Tools

Given the nature of the problem, the environment must include tools that allow distributed computing and also support complex workflows, such as machine learning algorithms. An additional requirement is to keep the project as simple and efficient as possible.
Therefore, Python became the prevalent choice, being an interpreted, high-level programming language with dynamic semantics. Also, Dask is a lightweight Python library that provides distribution and can be easily combined with Scikit-Learn, which contains many efficient tools for machine learning and statistical modeling.

2.4.1 Scikit-Learn

Scikit-Learn [13] is a Python module integrating classical machine learning algorithms into the tightly-knit world of scientific Python packages. It is designed to provide simple and efficient solutions to learning problems, which is one of the main purposes of this project. It also provides a wide range of choices for all kinds of machine learning problems, such as classification, regression, clustering, and more. This thesis focuses on classification problems and especially on Support Vector Machines. From Scikit-Learn, the SGDClassifier estimator fits the specifications of the problem, since it is a simple approach to discriminative learning of linear classifiers under convex loss functions, such as the SVM hinge loss. The previous section described Stochastic Gradient Descent and the hinge loss in detail; practically, choosing hinge as the loss function for this estimator automatically results in a linear SVM classifier. Also, to test the implementation, several datasets with different features and sizes were needed. Scikit-Learn includes various random sample generators, such as make_classification, that can be used to build artificial datasets of controlled size and complexity.

2.4.2 Dask

Dask [4] can be considered the Hadoop/Spark equivalent for Python. It is a simple tool for data processing and can either run on a local computer or scale up to run on a cluster. It is an easy-to-use tool since it provides advanced parallelism for analytics, enabling performance at scale with minimal rewriting. In the past few years Dask has been extended with a distributed memory scheduler.
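A minimal sketch of the combination described above: fitting SGDClassifier with the hinge loss in mini-batches via partial_fit yields an online linear SVM. The dataset dimensions and batch size here are illustrative, not the thesis settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Illustrative toy dataset (much smaller than the experiments in this work)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

clf = SGDClassifier(loss="hinge", random_state=0)  # hinge loss -> linear SVM
classes = np.unique(y)                             # required on the first partial_fit
for start in range(0, len(X), 100):                # online training, batch by batch
    batch = slice(start, start + 100)
    clf.partial_fit(X[batch], y[batch], classes=classes)
```

Each `partial_fit` call performs one SGD pass over the given batch only, which is exactly the online behavior the distributed workers need.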
This enables Dask's existing parallel algorithms to scale across tens to hundreds of nodes and support distributed computing. In addition, another feature essential for this project is that workers can communicate with each other to share data. This removes central bottlenecks for data transfer and makes it possible to apply the FGM protocol.

2.4.3 Scikit-Learn and Dask Connection

Scikit-Learn uses joblib for single-machine parallelism. This supports training most estimators (anything that accepts an n_jobs parameter) using all the cores of a laptop or workstation. Alternatively, Scikit-Learn can use Dask for parallelism, which offers the ability to train those estimators using all the cores of a cluster without significant code changes. Considering this connection and the features each of them provides individually, together they clearly meet the specifications described earlier.

3 Implementation

This section describes the basic FGM protocol (without rebalancing) for distributed, online SVM training. First comes a description of the final model that combines SVM and the FGM protocol, focused on the way these two algorithms were combined to produce rather encouraging results, followed by a description of the main tools used in the implementation. The distributed SVM algorithm is based on the averaging model, where the global estimate is computed as the average of the individual models computed at each node. Given this structure, FGM determines when the nodes send their local models, making the communication as meaningful as possible. A more thorough description follows, which will help in understanding the results in Section 4.

3.1 Decentralized architecture

It should be clear by now that the main purpose of this work is to empirically confirm the communication gains of Distributed SVM via the FGM protocol. In the general case, the Distributed SVM structure consists of one coordinator and multiple workers, \( n \).
Each worker performs online training to update its local model and, after each update, ships the updated model to the coordinator. The coordinator then computes the estimated model as the average of the models received (Figure 9).

Figure 9: Distributed SVM

Hence, the goal is to avoid the enormous cost of constantly shipping the whole model, and to let the workers communicate only when the model is truly outdated, using the FGM protocol.

3.2 Safe Function

Starting the analysis of the FGM algorithm, the safe function is a central concept. The configuration of the system of \( k \) sites is a \( (kD) \)-dimensional vector consisting of the concatenation of the \( k \) local drift vectors \( X_i \). The system is in a safe state as long as

\[ E + \frac{\sum_{i=1}^{k} X_i}{k} = S \in A \]

To guarantee that a configuration is safe, FGM employs a real function \( \phi : \mathbb{R}^D \to \mathbb{R} \), depending on \( A, E \) and \( k \). Each site tracks its \( \phi \)-value, \( \phi(X_i) \), as \( X_i \) is updated. System safety is guaranteed by tracking the sign of the sum \( \psi = \sum_{i=1}^{k} \phi(X_i) \). In particular, we need to guarantee that \( \psi \leq 0 \) implies \( S \in A \). A safe function that fits the criteria of SVM training had to be selected. One of the simplest \( (A, E, k) \)-safe functions for this kind of monitoring is:

\[ \phi(x) = \max\left\{-\epsilon\|E\| - x \cdot \frac{E}{\|E\|},\; \|x + E\| - (1 + \epsilon)\|E\|\right\} \] \hspace{1cm} (3.2)

which is based on the distance between the two vectors \( E \) and \( x \) (where \( E \) represents the global estimate and \( x \) the drift vector) relative to a threshold \( \epsilon \).

### 3.3 Basic FGM Protocol for Learning

#### FGM Protocol

The FGM protocol works in rounds to monitor the threshold condition

\[ \sum_{i=1}^{k} \phi(X_i) \leq 0 \] \hspace{1cm} (3.3)

in order to detect when the local training models of the workers have changed significantly enough to warrant communication.
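The safe function (3.2) can be written directly in NumPy (a sketch; `eps` plays the role of the threshold \( \epsilon \)):

```python
import numpy as np

def phi(x, E, eps=0.3):
    """(A,E,k)-safe function of Eq. (3.2): compares the drift x with the
    global estimate E relative to the threshold eps."""
    norm_E = np.linalg.norm(E)
    return max(-eps * norm_E - x @ (E / norm_E),
               np.linalg.norm(x + E) - (1.0 + eps) * norm_E)
```

For a zero drift, phi(0, E) = -eps * ||E|| < 0, so a freshly synchronized site starts safely inside the admissible region, while a drift comparable to E itself drives phi positive.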
**Execution of rounds** At the beginning of a round, the coordinator knows the current state of the system, \( E = S \), and selects an \( (A,E,k) \)-safe function \( \phi \) as defined above. Note that \( \psi = \sum_{i=1}^{k} \phi(X_i) \) at each point in time. The FGM protocol steps are:

1. The coordinator ships \( \phi \) to every worker (or just \( E \)), and the local workers initialize their drift vectors to 0. Hence, initially \( \psi = k\phi(0) \), where \( k \) is the number of workers, and subrounds begin.

2. A number of subrounds is executed, as described below. The subrounds end when the local training models have changed in a way that makes \( \psi \geq e_{\psi}k\phi(0) \), where \( e_{\psi} \) is related to the desired quantization for monitoring \( \psi \) (in this work, \( e_{\psi}=0.01 \)).

3. Finally, the coordinator ends the round by collecting all the drift vectors and updating \( E \).

**Execution of subrounds** The purpose of the subrounds is to monitor the condition \( \psi \leq 0 \) coarsely, with a precision of roughly \( \theta \), performing as little communication as possible. Subrounds are executed as follows:

1. The coordinator knows the value \( \psi \), computes the subround's quantum \( \theta = -\frac{\psi}{2k} \), ships \( \theta \) to each local worker, and initializes a counter \( c = 0 \). Each worker records its initial value \( z_i = \phi(X_i) \), where \( 2k\theta = -\sum_{i=1}^{k} z_i \), and initializes a counter \( c_i = 0 \).

2. Each local worker \( i \) maintains its local drift vector \( X_i \) as it updates the local model by performing partial_fit over a mini-batch of data. When \( X_i \) is updated, worker \( i \) updates its local counter as follows:

$$c_i := \max\left\{c_i, \left\lfloor \frac{\phi(X_i) - z_i}{\theta} \right\rfloor \right\}$$ \hspace{1cm} (3.4)

If this update increases the counter, the local worker sends a message to the coordinator with the increase to \( c_i \). 3.
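Equation (3.4) quantizes each worker's change in \( \phi \)-value in units of \( \theta \); a direct sketch:

```python
import math

def update_counter(c_i, phi_xi, z_i, theta):
    """Eq. (3.4): the counter tracks how many quanta theta the phi-value
    has risen above its value z_i recorded at the start of the subround."""
    return max(c_i, math.floor((phi_xi - z_i) / theta))
```

Only increases are reported to the coordinator, so each worker sends at most a few small integer messages per subround instead of full model vectors.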
When the coordinator receives a message with a counter increment from some worker, it adds the increment to its counter \( c \). If \( c \) exceeds \( k \), the coordinator finishes the subround by collecting all \( \phi(X_i) \) from the local workers and recomputing \( \psi \). If \( \psi \geq e_\psi k\phi(0) \), the subrounds end; otherwise another subround begins. During the execution of a subround, if \( c \leq k \) then \( \sum_{i=1}^{k} \phi(X_i) < 0 \).

Note that in the FGM protocol there are two kinds of communication: *downstream communication*, consisting of messages from the local nodes to the coordinator, and *upstream communication*, consisting of messages from the coordinator to the local nodes.

### 3.4 SVM-FGM protocol

The implementation of the SVM-FGM protocol is based on sklearn and Dask. Although sklearn is a library intended for centralized machine learning tasks, it was adapted to a distributed online training scenario with the use of Dask. Note that the classic fit function that sklearn provides could not be used, since this is an online SVM algorithm and fit does not support fitting the same model multiple times. Fortunately, the SGDClassifier estimator provides partial_fit, which is intended for exactly this kind of task, since it allows fitting the same model multiple times with different training samples. The SGDClassifier estimator implements regularized linear models with stochastic gradient descent learning, including SVM, and it was preferred over other sklearn estimators for SVM because it provides partial fitting. On the other hand, Dask is in charge of the distribution. Dask initializes a distributed cluster with the given number of workers and threads per worker. Workers can directly exchange messages with the Dask scheduler through built-in communication, but this is not the case for communication between workers. Since the FGM coordinator is implemented as a Dask worker, worker-to-worker communication was needed, and it is provided by Dask's Pub-Sub pattern.
A thorough explanation of both partial_fit and worker-to-worker communication follows, leading up to the SVM-FGM protocol implementation.

**Partial Fit** The SGDClassifier estimator implements regularized linear models with stochastic gradient descent learning and includes the `partial_fit()` function to support online machine learning. This function performs one epoch of stochastic gradient descent on the given samples, so there is no guarantee that the minimum of the cost function is reached; instead, `partial_fit` corrects the model one step at a time as new data arrives. This is especially useful when the whole dataset is too big to fit in memory at once. The method has some performance and numerical stability overhead; nonetheless, as the experiments also indicate, the time of one `partial_fit` call does not increase crucially even when the batch size increases significantly. Figure 10 illustrates the time that one `partial_fit` call needs for different mini-batch sizes.

**Worker-to-worker communication** Worker-to-worker communication is communication directly between workers, without involving the scheduler at all. Dask implements the Publish-Subscribe pattern, providing an additional channel of communication between ongoing tasks. This allows workers to exchange data directly with each other in a typical Publish-Subscribe fashion. It involves two components: Pub objects, into which we put data, and Sub objects, from which we collect data. The Dask workers are submitted two different kinds of jobs, the coordinator function and the worker function. Note that Dask sees the coordinator as a worker, so the communication between workers and coordinator can simply be performed via the Publish-Subscribe pattern. In particular, the coordinator creates a subject for each kind of message needed for the upstream and downstream communication.
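A single-process stand-in for this pattern (plain queues instead of Dask's real `Pub`/`Sub` objects, which require a running cluster) illustrates the per-subject FIFO buffers:

```python
from collections import defaultdict
from queue import Queue

class PubSubBuffers:
    """Toy stand-in for Dask Pub/Sub: one FIFO buffer per subject."""
    def __init__(self):
        self._buffers = defaultdict(Queue)

    def publish(self, subject, message):   # what a Pub does
        self._buffers[subject].put(message)

    def receive(self, subject):            # what a Sub does (FIFO order)
        return self._buffers[subject].get()
```

With Dask's real objects, a worker would create `Pub("downstream")` and the coordinator `Sub("downstream")`, with `put`/`get` playing the roles of `publish`/`receive` here.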
For upstream communication messages there is one publisher (the coordinator) and multiple subscribers (the workers), whereas for downstream communication there are multiple publishers (the workers) and one subscriber (the coordinator). The publishers simply push a message into the pub-sub buffer created for that kind of message (each subject is a different buffer), and the relevant subscribers receive it by taking the first element of the buffered messages. Pub-sub follows the FIFO principle, so the first message pushed into the buffer is also the first one out. From now on, whenever upstream or downstream communication is mentioned, it refers to this Pub-Sub communication.

![Figure 11: Illustration of Publish-Subscribe pattern](image)

**Setup** Applying the FGM protocol to the distributed SVM communication results in a new distributed structure that includes downstream and upstream communication between the workers and the coordinator, as illustrated in Figure 12. The system consists of multiple (\( k \)) workers and one coordinator that orchestrates them. At the beginning of the implementation, a Dask distributed cluster is created with \( k+1 \) workers and one thread per worker. A large training dataset is split into \( m \) equal parts (in this case \( m=100 \)), and each chunk is randomly assigned to one worker, such that all the workers end up with the same number of chunks. Each worker reads the first chunk assigned to it and splits it into smaller parts, mini-batches. Then the coordinator requests that every worker "warms up" by training its local classifier with the first mini-batch, in order to initialize its model. The workers send their parameters back to the coordinator, and the coordinator computes the global estimate. After the "warm up" phase, the FGM protocol is applied. Practically, each worker trains its local model on the chunks randomly assigned to it from the original dataset, using partial_fit.
As the FGM protocol requires, while the local model changes, the worker updates its local counter as given in (3.4), and if this update increases the counter, the worker sends a message to the coordinator (downstream) with the increase. When the coordinator receives a message with a counter increment from some worker, it adds the increment to its counter \( c \). If \( c \) exceeds \( k \), the coordinator requests (upstream) that the workers compute the safe function on their local drifts, \( \phi(X_i) \). The workers send the \( \phi(X_i) \) values to the coordinator (downstream). The coordinator monitors the threshold condition on the sum of the \( \phi(X_i) \) and, if a violation occurs, requests the local drifts in order to recompute the global estimate, which it then broadcasts to all the workers. The workers receive the global estimate, reinitialize their models, and continue with the training procedure. Overall, the system performs costly communication only when the local models have changed significantly. All of the above is described by Algorithm 1. Note that \( S_i \) is the local model and \( X_i \) the drift vector. Also, the ML parameters are the coefficients (weights) and the intercept (bias), as defined by the SVM algorithm.

Algorithm 1 SVM-FGM

Initialization at the coordinator
1. Start \( k \) workers
2. Subscribe to all the subjects into which the workers will publish
3. Warm up the global clf, ending up with parameters coef, interc
4. \( E \leftarrow [\text{coef}, \text{interc}] \), \( c \leftarrow 0 \), \( \psi \leftarrow k\phi(0) \), \( \theta \leftarrow -\frac{\psi}{2k} \)
5. **Publish** \( E \) and \( \theta \) to all workers and start the first round

A. Worker \( i \) on receiving \( E \) and \( \theta \) at the start of a new round:
1. update the local model: \( S_i \leftarrow E \)
2. quantum \( \leftarrow \theta \), \( c_i \leftarrow 0 \), \( z_i \leftarrow \phi(X_i) \)

B. Worker on receiving \( \theta \) at the start of a new subround:
1. \( c_i \leftarrow 0 \), quantum \( \leftarrow \theta \), \( z_i \leftarrow \phi(X_i) \)

C.
Worker \( i \) on observing data at time \( t \):
1. load mini-batch \( batch_i \)
2. if \( batch_i \neq \text{None} \) then
3. update the local model \( S_i \) by partial_fit on \( batch_i \)
4. compute drift \( X_i = S_i - E \)
5. \( \text{ObservedBatches} \leftarrow \text{ObservedBatches} + 1 \)
6. compute \( c_{i,\text{new}} := \max\{c_i, \lfloor \frac{\phi(X_i) - z_i}{\theta} \rfloor\} \)
7. if \( c_{i,\text{new}} > c_i \) then
8. \( Increment_i \leftarrow c_{i,\text{new}} - c_i \)
9. \( c_i \leftarrow c_{i,\text{new}} \)
10. **Publish** \( Increment_i \) to the coordinator
11. end if
12. end if

D. Coordinator on receiving an increment:
1. \( c \leftarrow c + Increment_i \)
2. if \( c > k \) then
3. request and collect all \( \phi(X_i) \) from all workers
4. \( \psi \leftarrow \sum_{i=1}^{k} \phi(X_i) \)
5. if \( \psi \geq e_\psi k\phi(0) \) then
6. request and collect all \( X_i \) from all workers
7. \( E \leftarrow E + \frac{1}{k} \sum_{i=1}^{k} X_i \)
8. update \( k \leftarrow \) number of pending workers
9. \( c \leftarrow 0 \), \( \psi \leftarrow k\phi(0) \), \( \theta \leftarrow -\frac{\psi}{2k} \)
10. **Publish** \( E \) and \( \theta \) to all workers and start a new round (code A)
11. else
12. \( c \leftarrow 0 \), \( \theta \leftarrow -\frac{\psi}{2k} \)
13. **Publish** \( \theta \) to all workers to start a new subround (code B)
14. end if
15. end if

4 Experimental Results

This section presents the practical performance of the SVM-FGM protocol in a real distributed environment. The main goal is to empirically confirm the performance gains over a centralized (sequential) structure.

4.1 Datasets

The datasets were generated with sklearn random sample generators. These generators can be used to build artificial datasets of controlled size and complexity. Since the main objective is to test classification over a dataset, the function make_classification was used. This function generates a random n-class classification problem given parameters that specify the size and nature of the problem, the class balance, and the desired amount of label noise.
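The datasets of the next subsection can be produced with make_classification as follows (a sketch; sizes are reduced here so it runs quickly, whereas the experiments use 30000 samples and 1000 features):

```python
from sklearn.datasets import make_classification

def make_noisy_dataset(flip_y, n_samples=3000, n_features=100):
    # flip_y is the fraction of labels assigned at random (the label "noise")
    return make_classification(n_samples=n_samples, n_features=n_features,
                               weights=[0.5, 0.5],   # balanced classes
                               flip_y=flip_y, random_state=0)

# One dataset per noise level used in the experiments
datasets = {flip: make_noisy_dataset(flip) for flip in (0.0, 0.1, 0.2)}
```

Increasing `flip_y` makes the labels noisier and the classification task correspondingly harder.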
The system was tested with 3 different datasets with the same numbers of samples and features but different percentages of noise. The noise is determined by the flip_y parameter, which gives the fraction of samples whose class is assigned randomly. Larger values introduce noise in the labels and make the classification task harder.

Dataset 1
- number of samples: 30000
- number of features: 1000
- weights: 50-50 (balanced dataset)
- flip_y: 0

Dataset 2
- number of samples: 30000
- number of features: 1000
- weights: 50-50 (balanced dataset)
- flip_y: 0.1

Dataset 3
- number of samples: 30000
- number of features: 1000
- weights: 50-50 (balanced dataset)
- flip_y: 0.2

4.2 Results

The results were illustrated via a Jupyter notebook and the pyplot library. It is important to mention that the Pub-Sub structure incurs a small timeout penalty when receiving messages. The following experiments show that the time, and hence the speedup, is not proportional to the number of workers, since as the number of workers increases so does the communication, leading to a bigger timeout penalty. One of the parameters that affects the communication, and hence the performance of the distributed architecture, is the threshold \( e \), which changes the sensitivity to changes in the local models. Higher values of \( e \) make the protocol more resistant to local model changes, so communication occurs more rarely, whereas lower values of \( e \) make it more sensitive, resulting in frequent communication. Figure 13 illustrates how the communication and time change with different threshold values.
Figure 13: Rounds/Time distributed for different threshold values

It is clear that very small threshold values result in centralized-like behavior and extremely high communication. From now on the selected threshold is a medium value, \( e=0.3 \), to illustrate the average-case scenario. Below, the performance of the system is illustrated for different numbers of workers.

- Dataset 1

Figure 14: 1. An illustration of the performance of the SVM-FGM protocol for different numbers of workers; 2. The accuracy for different numbers of workers. Both for 2 passes over the same dataset, without noise.

- Dataset 2

Figure 15: 1. An illustration of the performance of the SVM-FGM protocol for different numbers of workers; 2. The accuracy for different numbers of workers. Both for 2 passes over the same dataset, with noise 0.1.

- Dataset 3

Figure 16: 1. An illustration of the performance of the SVM-FGM protocol for different numbers of workers; 2. The accuracy for different numbers of workers. Both for 2 passes over the same dataset, with noise 0.2.

To measure the speedup of the system, we added the time for the second pass to that of the first to get the total time needed for 2 passes, and compared it with the total time the centralized architecture needs. The speedup for the 3 different noise settings is illustrated below.

- Dataset 1

Figure 17: An illustration of the speedup for different numbers of workers with no noise

- Dataset 2

Figure 18: An illustration of the speedup for different numbers of workers with 0.1 noise

- Dataset 3

Figure 19: An illustration of the speedup for different numbers of workers with 0.2 noise

The second pass is used in order to reach a better, much smoother accuracy that converges to the maximum accuracy for this dataset. Also, using the total time for 2 passes does not affect the speedup much, since the distributed time for 2 passes still yields a superlinear speedup even against 1 pass of the centralized version.
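The speedup bookkeeping above amounts to the following (the timings used below are made-up placeholders, not measured values from the experiments):

```python
def speedup(t_centralized, t_distributed):
    # speedup of the distributed run over the centralized baseline
    return t_centralized / t_distributed

def is_superlinear(t_centralized, t_distributed, k):
    # superlinear: the distributed run beats the ideal bound t_centralized / k
    return t_distributed < t_centralized / k
```

For example, with a (hypothetical) centralized time of 100 s, a distributed time of 20 s on 4 workers gives a speedup of 5, which exceeds the worker count and is therefore superlinear.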
Note that the number of nodes does not affect the total time of the distributed system much. This is due to the almost constant overhead that Dask with the Pub-Sub pattern adds to the system. The figures below illustrate the time the coordinator needs for its different phases:

- start time: the time needed to begin the round and warm up.
- wait time: the time during which the coordinator processes a small amount of bits, and mainly waits for a significant change to occur.
- process time: the time the coordinator needs to collect information from the workers and perform computation.

The tables illustrate the times for each number of workers:

- Dataset 1

Figure 20: The average times over 15 runs for each number of workers.

and the corresponding plot:

Figure 21: The average times over 15 runs for each number of workers. There are three plots, one for each phase.

- Dataset 2

Figure 22: The average times over 15 runs for each number of workers.

The corresponding plot:

Figure 23: The average times over 15 runs for each number of workers. There are three plots, one for each phase.

- Dataset 3

As illustrated in the plots above, *wait_time* decreases as the number of workers increases, due to the reduced number of chunks each worker has to process. However, even though *wait_time* decreases, the *process_time* for the subrounds and the *process_time* for the rounds increase, due to the workload the coordinator has to handle when processing the messages. The experimental results indicate that, with the use of the functional geometric monitoring method, this work achieved both superlinear speedup and centralized-like accuracy, with minimal communication cost. They also indicate that sklearn can perform equivalently in a distributed system when using Dask and the averaging model.
The configuration that performs best in most cases uses a threshold of 0.3.

**Superlinear Speedup** Superlinear speedup is the case where the parallel execution of an algorithm obtains a speedup greater than the number of processors, for example by utilizing more cache memory: since more cache is available in parallel execution, for some range of problem sizes the whole working set fits in cache, while in sequential execution it does not. In this work, super-linear speedup may occur because the overlap of CPU and I/O that the distributed architecture performs does not occur in the centralized structure, resulting in a much lower execution time than the expected minimum (the centralized execution time divided by the number of workers, \( t_{cent}/k \)).

## 5 Conclusions

Distributed Support Vector Machine training is commonly solved by a distributed chunking technique, where the results of the training procedure are combined using an averaging protocol. Although previous approaches achieved centralized-like efficiency, none of them takes into consideration the communication cost of a distributed structure. This work proposed an SVM-FGM averaging protocol that achieves high predictive performance, yet requires substantially less communication than other contemporary distributed averaging SVM protocols. In addition to the reduced communication, experimental results showed that a significantly high speedup, in fact super-linear speedup, has been accomplished, establishing that the SVM-FGM protocol is indeed suitable for real-time applications. Although the SVM-FGM protocol achieved high performance, there is still plenty of room for improvement. First of all, an interesting direction would be to achieve an inversely proportional relation between time and the number of workers, so that the system performs better at larger scales. Additionally, machine learning includes a wide range of algorithms that correspond to different problems.
Hence, a very interesting study would be to practically test other distributed machine learning algorithms and evaluate the results, in order to assess the applicability of FGM to machine learning more broadly.
Requirements and Specifications for Adaptive Security: Concepts and Analysis

T. T. Tun†, M. Yang*, A. K. Bandara†, Y. Yu†, A. Nhlabatsi§, N. Khan§, K. M. Khan§, B. Nuseibeh†‡
† School of Computing & Communications, The Open University, Milton Keynes, UK
* Department of System Management & Strategy, University of Greenwich, London, UK
§ Computer Science and Engineering, KINDI Computing Research Centre, Qatar University, Qatar
‡ Lero - The Irish Software Research Centre, University of Limerick, Limerick, Ireland

ABSTRACT

In an adaptive security-critical system, security mechanisms change according to the type of threat posed by the environment. Specifying the behavior of these systems is difficult because conditions of the environment are difficult to describe until the system has been deployed and used for a length of time. This paper defines the problem of adaptation in security-critical systems, and outlines the RELAIS approach for expressing requirements and specifying the behavior in a way that helps identify the need for adaptation, and the appropriate adaptation behavior at runtime. The paper introduces the notion of adaptation via input approximation and proposes statistical machine learning techniques for realizing it. The approach is illustrated with a running example and is applied to a realistic security example from a cloud-based file-sharing application. Bayesian classification and logistic regression methods are used to implement adaptive specifications, and these methods offer different levels of adaptive security and usability in the file-sharing application.

CCS CONCEPTS
• Security and privacy; • Software and its engineering;

KEYWORDS
Security requirements, Self-adaptation

1 INTRODUCTION

Requirements for software systems are rooted in an environment characterized by complex runtime conditions.
In adaptive software systems, some of the runtime conditions are difficult to predict and approximate until the system has been deployed and used for a length of time. This poses a challenge for requirements engineering because incomplete knowledge of the environment can lead to incorrect specifications and violation of critical requirements at runtime [27]. For example, consider the class of security attacks known as parameter tampering where an attacker modifies the HTTP parameter values sent from the client computer to the server computer in order to escalate privileges. Much of the existing work on detecting and preventing parameter tampering attacks (such as [22] and [14]) uses domain-specific constraints by validating the parameter values both at the client side and the server side. For example, in order to prevent unauthorized "discount" attacks in e-commerce applications, defense mechanisms against parameter tampering ensure that the values for the quantity parameter are never negative when received by the server. These constraints are typically specified and implemented before the system becomes operational. This paper considers cases where it is not possible to write rules for detecting parameter tampering at design time. For example, when various subsets of a group of users share documents using a web-based file sharing service (such as Dropbox, ownCloud etc), one security requirement is to prevent self-inviting, namely that a user should not be allowed to invite himself to a shared folder he does not own. If an attacker is able to intercept a share invitation and add his own ID as an invitee via parameter tampering (this attack scenario is discussed further in Section 2.1), it is difficult to specify at design time the necessary behavior to prevent the attack because the identities of the attacking and targeted users are yet unknown. 
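A design-time constraint of the kind just described, such as the rule that the quantity parameter must never be negative, can be sketched as a server-side re-validation check. The following is a minimal illustrative sketch in Python; the parameter names are hypothetical, not taken from any real system:

```python
def validate_order_params(params):
    """Server-side re-validation of client-supplied HTTP parameters.

    Implements a design-time constraint of the kind described above:
    the 'quantity' value must be a non-negative integer, regardless of
    what any client-side JavaScript already checked.  Parameter names
    here are illustrative.
    """
    try:
        quantity = int(params.get("quantity", ""))
    except ValueError:
        return False  # malformed or missing input is rejected
    return quantity >= 0

assert validate_order_params({"quantity": "3"})
assert not validate_order_params({"quantity": "-2"})   # tampered "discount" attack
assert not validate_order_params({"quantity": "abc"})  # malformed input
```

Such a rule is fixed before the system becomes operational, which is exactly why it cannot cover the self-invitation attack discussed next.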
Therefore, design time constraints are largely ineffective for addressing the class of security problems this paper is considering. In order to address the problem, this paper proposes an approach for specifying adaptive security behavior under partial knowledge of the system environment and for incorporating further knowledge about the environment, discovered at runtime, into the adaptive specification. In doing so, this paper makes the following three contributions. First, the definition and illustration of a notion of user-driven adaptation, which allows for incomplete knowledge of the environment, with the software and parts of its environment cooperating at runtime to adapt appropriately (Section 2). Conceptually, requirements are divided into episodic and run requirements, and informally, adaptation means the modification of episodic behavior in order to satisfy run requirements (Section 3). Second, the implementation of the proposed notion of adaptation at runtime by means of input approximation: whenever the system receives an input for which the required output (as demanded by requirements) is not certain, there is a need for adaptation. The adaptation behavior is identified by finding a similar input value for which there is a required output value, and producing that output value (Section 3). The paper proposes an architecture and a method for implementing input approximation using machine learning techniques. Third, the demonstration of how parameter tampering attacks against the file sharing application ownCloud can be identified and prevented at runtime using statistical machine learning methods in a scalable fashion (Section 4). In particular, the evaluation of Bayesian and logistic regression methods for identifying possible self-invites indicates that a higher degree of security (fewer missed attacks) comes at the cost of reduced convenience (more false alarms). There is a substantial body of work on expressing and analyzing requirements for adaptation.
For instance, Whittle et al. [25, 26] have proposed RELAX, which supports a fuzzy logic-based approach to analyzing uncertainty in adaptive systems. Ghezzi et al. [5] have examined the problem of when to introduce a new controller to a running system without restarting it, and verifying that the new controller meets its specification. Souza et al. [23] have proposed a method for weakening requirements as a way to adapt to the changing context. Existing work on partial model-based reasoning such as [19] has shown how uncertainty in model refinement can be reduced by defining conditions that need to be met during model transformation. The proposed approach is different from the existing work on requirements and specifications for adaptation in two ways. First, current approaches tend to view adaptation as a way of avoiding or preventing requirement violations based on design time, and perhaps runtime, knowledge of the environment. While such approaches are important, this paper takes the view that adaptation should also be in response to requirement violations encountered at runtime, without weakening the requirements. Second, adaptation tends to be regarded essentially as an optimization problem in existing approaches, where the objective is to choose a system behavior that gives the highest possible level of requirements satisfaction. This paper posits that adaptation is not just an optimization problem, but also a problem of collaboration between parts of the environment and the software: typically, users in the environment are able to guide the software to adapt to a certain desired behavior. The rest of the paper is organized as follows. Section 2 introduces a running example and the adaptation problem that motivates our work, together with the notions of requirements for security and adaptation. Section 3 introduces the proposed approach and illustrates its features with the running example.
Section 4 presents an evaluation of the use of statistical machine learning approaches to implementing an adaptive specification for preventing the parameter tampering attacks. Related work is discussed in Section 5 and concluding remarks are given in Section 6.

2 PRELIMINARIES

This section introduces the running example, recalls the Problem Frames notation, and discusses the adaptation problem.

### 2.1 Parameter Tampering Attacks

The security problem discussed here is related to several types of security attacks. One well-known type of attack is the parameter tampering attack [22], where an attacker intercepts and modifies data (which has usually been validated on the client side) in order to gain unauthorized privileges on the server. Another type of attack is cookie poisoning [12], where an attacker modifies the values stored on the web browser in order to bypass checks or escalate privileges. There are also various types of code injection attacks [1], where it is the code rather than the data that is inserted into the communication in order to gain unauthorized access (for alternative names and real-world examples of these attacks, see [13]).

Scenario: In the scenario we are considering, Alice, Bob, Sam and Tom work in an organization (Fig. 1). Different subgroups of them work on different projects, and they share documents using ownCloud, a cloud-based file sharing system. The lead person of the project creates a project-specific folder (inviter) and invites those who work on the project to share that folder (invitees). One security requirement of this application is that only those invited by the inviter are able to view the shared folder.

Attack: Bob knows that Alice, Sam, and possibly Tom also, will be working on a secret project in the near future. He knows that he will not be invited to share the documents in the project folder, but he wants to see them nevertheless.
So he writes a program that examines all HTTP requests on their office router to perform a parameter tampering attack as follows (Fig. 2). Bob's program running on the router looks for all HTTP requests for ownCloud document sharing where the inviter is Alice and the invitees include Sam, and the program adds his own name to the invitee list. It means that whenever Alice invites Sam to join a folder, Bob also gets an invite, although it is not sent by Alice. Following much of the existing work on detecting and preventing parameter tampering attacks (such as [22] and [14]), the scenario above assumes that the attacker can read the communication between the users and the server. In practice, this assumption can hold for a number of reasons, including: (i) the web application does not use TLS/SSL, (ii) the attacker has compromised the router (through DNS hijacking, for example), and (iii) the attacker has created an “evil twin” wifi access point [15]. In a parameter tampering attack, an attacker may modify parameter values, field names, and the sequence of values communicated between the client and the server [22]. It is noted that in parameter tampering attacks, the attacker may have legitimate interactions with the system, and the attack itself may be disguised as one. The general solution to parameter tampering attacks is to check and enforce static data integrity rules, both on the client side (typically via JavaScript programs for validating form data) and the server side (typically through code analysis [12]). This solution, however, does not apply in our ownCloud example: since Alice may sometimes want to share certain folders with Bob, Bob cannot be removed from invitee lists syntactically. Therefore, a JavaScript-based approach will work only if the information Bob inserts into the request string has distinct characteristics, such as the email address Bob uses in the attack being different from the one he uses for legitimate sharing.
The problem here is that, given a share request containing a group of invitees, we want to find out how likely it is that one of the invitees is an attacker (a self-invited invitee).

**Definitions:** Let \( U \) be the finite set of users \( \{a, b, s, t\} \) (abbreviations of Alice, Bob, Sam and Tom) who may create and share documents and folders using ownCloud. An inviter sends an invite to other users to allow them access to a folder (for simplicity, we will leave out details about the folders being shared). The set of invites \( \text{Invites} \) is defined as a set of pairs of an inviter \( (er) \) and some invitees \( (ee) \):

\[ \text{Invites} = \{(er, ee) \mid (er, ee) \in U \times 2^U \land er \notin ee \land ee \neq \emptyset\} \]

Notationally, we will write an invite as comma-separated variables where the inviter has an overline. So, the request where Alice invites Sam and Tom, that is the relation \( (a, \{s, t\}) \), is simplified as \( \overline{a}, s, t \). When the server receives such an invite, all invitees are notified and they are able to access the shared folder (without having to accept the invitation to share). We assume that the inviter cannot be one of the invitees. Similarly, an inviter sends an uninvite to have someone removed from the invitee list of a folder.

\[ \text{Uninvites} = \{(er, ee) \mid (er, ee) \in U \times U\} \]

Again, we will write \( \overline{a}, \sim b \) to say Alice uninvites Bob.

### 2.2 The Problem Frames Notation

Fig. 3 shows the ownCloud sharing problem using the Problem Frames notation [8], and the diagram should be read as follows. At the interface \( c \), the phenomena description \( \text{ER}!\{\text{Send}(\text{invite}), \text{Send}(\text{uninvite})\} \) means that the problem domain Inviter controls the events \( \text{Send}(\text{invite}) \) and \( \text{Send}(\text{uninvite}) \), and these events are observed by the problem domain Office Router.
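The well-formedness conditions on invites (the inviter is a known user, the invitee set is non-empty, and the inviter cannot invite himself) can be checked mechanically. A minimal illustrative sketch in Python, with names of our own choosing:

```python
U = {"a", "b", "s", "t"}  # Alice, Bob, Sam, Tom

def is_valid_invite(inviter, invitees):
    """Check that (er, ee) is in Invites: the inviter is a known user,
    the invitee set is a non-empty subset of U, and the inviter is not
    one of the invitees."""
    return (inviter in U
            and bool(invitees)            # ee must be non-empty
            and set(invitees) <= U
            and inviter not in invitees)  # no self-invitation

# Alice invites Sam and Tom: written ā, s, t in the paper's notation.
assert is_valid_invite("a", {"s", "t"})
# Self-invitation is ruled out by the definition.
assert not is_valid_invite("a", {"a", "s"})
# An empty invitee set is not a valid invite.
assert not is_valid_invite("a", set())
```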
Similarly, at the interface \( d \), Office Router controls the events \( \text{Send}(\text{invite}) \) and \( \text{Send}(\text{uninvite}) \) that are observed by ownCloud Server, and the events \( \text{Add}[U] \) and \( \text{Remove}[U] \) controlled by ownCloud Server are observed by Office Router. Events at the other interfaces can be read in the same way. The behavior of Office Router is to send an invite request to ownCloud Server whenever a request is received. The behavior of Inviter is such that when they want to share a folder with some other users (share request \( [EE] \)), they send invites via Office Router to ownCloud Server. Occasionally, when someone needs to be removed from the invitee list, the inviter sends an uninvite. A sequence of one invite by a user (namely at Office Router), optionally followed by one uninvite, is called a share episode, which is defined as: \[ SE = \{(i; u) \mid i \in \text{Invites} \land (u \in \text{range}(i) \lor u = \text{null})\} \quad \text{(SE)} \] For example, the share episode \( (\overline{a}, b, s; b) \) says that there is an invite where Alice invites Bob and Sam (the invitees), and \( b \) was subsequently removed by \( a \) from the list of invitees. In another episode, \( (\overline{a}, b; \sim) \), the inviter is \( a \), the invitee is \( b \), and no-one was uninvited. A requirement is a desired relationship between the variables referenced and constrained by the requirement, which in this case is the relationship between share request and unshare request at the interface \( a \), and shared and unshared at the interface \( b \).
More specifically, the requirement is a set of pairs of share requests and invited users, where only those users in the share requests are invited: \[ \text{Req} = \{(\text{share request}[EE], \langle \text{invited}(u_1), \ldots, \text{invited}(u_n) \rangle) \mid EE = \{u_1, \ldots, u_n\}\} \]

### 2.3 The Adaptation Problem

It is difficult to write design-time constraints for when to remove a user Bob from an invitee list and when not. We observe that this difficulty is related to two issues. First, since the machine ownCloud Server cannot observe the requirement variables such as EE in the phenomenon ER!share request[EE] directly, but must use the variable OR!Send(invite) instead, there is a gap between the specification phenomena and the requirement phenomena. Second, there is some uncertainty about the behavior of the environment, and in particular about Office Router. In the relationship between ER!Send(invite) and OR!Send(invite), for example, the router cannot guarantee that the invitee lists at the two interfaces are the same. The question of how unreliable a particular office router is cannot be known before the system becomes operational. Given these two issues, it is difficult to write a weakened specification at design time that will satisfy the requirement.

3 THE RELAIS APPROACH

This section discusses and illustrates the key concepts of the RELAIS (Requirements Engineering Language for Adaptive Information Security) approach. We will define these concepts independently of existing requirements engineering languages (in the style of [17]), and exemplify them with the running example. An appropriate input approximation method is selected and an architecture is selected before a run requirement is specified. In our discussion of these concepts below, we will use the notion of observation.
As used in the statistics literature [2], an observation refers to a recorded value of a variable of a simple or complex data type.

### 3.1 Observations of the system

We will describe the behavior of system \( S \) as a totally ordered finite set of observations of the vector \( V \) of typed variables \( (f_1, f_2, \ldots, f_n) \). The variables \( f_1, f_2, \ldots, f_n \) in the vector are of specific types such as boolean, numeric, textual and so on. For example, the vector \( \{\text{share request}[\text{EE}], \text{ER!Send(invite)}, \text{OR!Send(invite)}, \text{OCS!Add}[U], \text{OR!Add}[U], \text{invited}[U]\} \) contains all the variables to describe the behavior of the ownCloud system in Fig. 3. In this system, the following sequence of observations describes a behavior in which Alice invites Tom and Sam (na means not available): \[ \begin{align*} \langle\, &(\text{share request}[s,t],\; na,\; na,\; na,\; na,\; na); \\ &(na,\; \text{Send}(\overline{a}, s, t),\; na,\; na,\; na,\; na); \\ &(na,\; na,\; \text{Send}(\overline{a}, s, t),\; na,\; na,\; na); \\ &(na,\; na,\; na,\; \text{Add}[s]\ \text{Add}[t],\; na,\; na); \\ &(na,\; na,\; na,\; na,\; \text{Add}[s]\ \text{Add}[t],\; na); \\ &(na,\; na,\; na,\; na,\; na,\; \text{Invited(Sam)}\ \text{Invited(Tom)})\,\rangle \end{align*} \] (B1) Partial observations of \( V \), known as projections of \( S \), are described by subscripting the positions of values in the vector. The relation between full and partial observations is surjective. The second of the six observations above, for example, can be projected on the second variable as \( \text{Send}(\overline{a}, s, t) \). Similarly, the entire sequence of the six observations in (B1) can be projected and simplified as: \[ \langle \text{share request}[s, t];\; \text{Send}(\overline{a}, s, t);\; \text{Send}(\overline{a}, s, t);\; \text{Add}[s], \text{Add}[t];\; \text{Add}[s], \text{Add}[t];\; \text{Invited(Sam)}, \text{Invited(Tom)} \rangle \] Notice that since we are not dealing with real-time constraints in this work, the sequencing of observations only indicates temporal ordering.
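The projection operation can be pictured with plain tuples. A small illustrative sketch, assuming the six-variable vector above and using the string "na" for unobserved values (the overline on the inviter is written here as a plain name):

```python
NA = "na"

# One observation = one 6-tuple over the vector
# (share_request, ER_send, OR_send, OCS_add, OR_add, invited).
behavior = [
    ("share request[s,t]", NA, NA, NA, NA, NA),
    (NA, "Send(a,s,t)", NA, NA, NA, NA),
    (NA, NA, "Send(a,s,t)", NA, NA, NA),
    (NA, NA, NA, "Add[s] Add[t]", NA, NA),
    (NA, NA, NA, NA, "Add[s] Add[t]", NA),
    (NA, NA, NA, NA, NA, "Invited(Sam) Invited(Tom)"),
]

def project(observation, positions):
    """Partial observation: keep only the values at the given
    (1-indexed) positions of the vector."""
    return tuple(observation[p - 1] for p in positions)

# Projecting the second observation on the second variable:
assert project(behavior[1], [2]) == ("Send(a,s,t)",)

# Projecting the whole sequence on the first and last variables:
seq = [project(o, [1, 6]) for o in behavior]
assert seq[0] == ("share request[s,t]", NA)
assert seq[-1] == (NA, "Invited(Sam) Invited(Tom)")
```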
However, time constraints can be handled by including the clock as one of the observed variables [17]. From the point of view of the machine, the variables in the vector can be characterized as follows. The machine can read the values of (or observe) some variables in the vector \( V \) (such as OR!Send(invite)) and can assign values to (or control) some of the variables (such as OCS!Add[U]). The vector may also contain variables that the machine can neither read from nor write to (such as OR!Add[U]). The variables in the vector may have causal and logical relationships between them, but those relationships are not defined explicitly in the vector.

### 3.2 Episodic Requirements

The set of all possible sequences of observations is denoted as \( S^* \subseteq 2^S \). An episodic requirement is a relation between two sequences of observations, \( R_{req} \subseteq S^* \times S^* \), where the domain is a sequence containing observations of referenced variables, and the codomain is a sequence of observations of constrained variables. Since we will typically project sequences of observations, we will write an episodic requirement as a relation between two projected observations. For example, the relation \( \langle (\text{share request}[\text{Sam}, \text{Tom}])_{1}, (\text{Invited(Sam), Invited(Tom)})_{6} \rangle \) records the values of those variables, without saying anything about the values other variables may take in the behavioral segment. Episodic requirements may be violated when the environment does not have the necessary property that the invite list does not change after the inviter has sent it. Given some behavior segments such as the following, \[ \langle \text{share request}[s, t];\; \text{Send}(\overline{a}, s, t);\; \text{Send}(\overline{a}, s, t);\; \text{Add}[s], \text{Add}[t];\; \text{Add}[s], \text{Add}[t];\; \text{Invited(Sam)}, \text{Invited(Tom)} \rangle \] we can say whether they satisfy or violate a given requirement.
The predicate \text{Satisfy1} will be used to say this more precisely, where the subscripts \( r \) and \( c \) represent the projection of the segment to the referenced and constrained variables of the requirement: \[ \text{Satisfy1}(\text{seg}, \text{Req}) \triangleq (\text{seg}_r, \text{seg}_c) \in \text{Req} \quad \text{(Sat1)} \] Negation of the predicate \text{Satisfy1} means that the segment does not satisfy the requirement.

### 3.3 Run Requirements

A property of several segments with respect to a requirement is called a run requirement. The predicate \text{Satisfy2} can be used to say this more precisely. \[ \text{Satisfy2}(\text{segs}, \text{Req}) \triangleq \exists s \in \text{segs} \cdot \text{Satisfy1}(s, \text{Req}) \quad \text{(Sat2)} \] Negation of the predicate \text{Satisfy2} means that there is no segment in the set \text{segs} that satisfies the requirement. We will use the operator \# in \#\text{Satisfy2}(\text{segs}, \text{Req}) to count the number of segments in \text{segs} satisfying the requirement \text{Req}; similarly, \#\neg\text{Satisfy2}(\text{segs}, \text{Req}) is the number of times \text{Req} is not satisfied in \text{segs}. For example, assuming \text{owc} contains all behavioral segments in the ownCloud system, \#\neg\text{Satisfy2}(\text{owc}, \text{Req}) = 0 says that the requirement \text{Req} must never be violated. Similarly, a weaker form of a run requirement can be expressed as a relative number of requirement violations. For example, \[ \frac{\#\neg\text{Satisfy2}(\text{owc}, \text{Req})}{\#\text{Satisfy2}(\text{owc}, \text{Req})} \leq 0.1 \] says that the requirement \text{Req} must not be violated for more than 10% of the time it is satisfied. Therefore, even when an episodic requirement is violated from time to time, the corresponding run requirement can still be satisfied. In this sense, run requirements are regarded as second-order.
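The counting predicates can be made concrete over a set of segments. A minimal sketch in Python, assuming a simplified representation in which each segment is reduced to its referenced and constrained projections (the encodings are ours, for illustration only):

```python
def satisfy1(seg, req):
    """Sat1: the (referenced, constrained) projection of the segment
    is a pair in the requirement relation."""
    return (seg["r"], seg["c"]) in req

def count_satisfy2(segs, req):
    """#Satisfy2: the number of segments satisfying the requirement."""
    return sum(1 for s in segs if satisfy1(s, req))

# Requirement: exactly the requested users end up invited.
req = {("share[s,t]", "invited[s,t]")}

segs = [
    {"r": "share[s,t]", "c": "invited[s,t]"},    # satisfied
    {"r": "share[s,t]", "c": "invited[b,s,t]"},  # violated: Bob self-invited
    {"r": "share[s,t]", "c": "invited[s,t]"},    # satisfied
]

sat = count_satisfy2(segs, req)
viol = len(segs) - sat
assert (sat, viol) == (2, 1)
# The run requirement "violations / satisfactions <= 0.1" fails here,
# because 1/2 = 0.5 exceeds the 10% threshold.
assert not (viol / sat <= 0.1)
```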
In the ownCloud example, every share episode where someone is uninvited is a segment of behavior that violates the security requirement, and every share episode where no-one is uninvited is a segment of behavior that satisfies the security requirement. The adaptation requirement in this example is to minimize the number of times the security requirement is violated, meaning that the number of times an inviter has to uninvite an invitee reduces as more share requests are processed by the system. This adaptation requirement is intended to improve security.

Figure 5: Problem Diagram: Adaptation in ownCloud

### 3.4 Adaptive Specification

When an episodic specification can violate its requirements from time to time, the behavior of the specification can be improved by introducing a new adaptive specification while extending the problem world. Fig. 5 shows a small extension to the Problem Frames syntax for writing run requirements. Notice that the lower half of the diagram contains the same domains as in Fig. 3. One way to read the insertion of Uninvite Reducer between ownCloud Server and Office Router is in terms of pipes and filters (as in Unix-like OSs). Uninvite Reducer acts as a filter, where the interface e is the pipe containing some parameter values. The intuitive idea is that, at the point of the circle, Uninvite Reducer intercepts all send events controlled by Office Router, and Uninvite Reducer can manipulate the values of an invite before it is visible to ownCloud Server. Through the interface f, Uninvite Reducer maintains all share episodes in the system. Here, Share Episodes captures the fact that users' reports of requirement violations, by means of unshare requests, are used to guide the adaptation. This role is related to that of requirements monitors in the ReqMon framework [18], but is different in the sense that it is not a software instrument and that it is intended to guide the adaptation.
The adaptive specification Uninvite Reducer can be described as a function that, given the history of invites and uninvites, calculates the probability of a user being uninvited later by the inviter. More specifically, a valid specification of Uninvite Reducer for such a requirement needs to answer the question \( (\overline{a}, b, s, t; ?) \): that is, when \( a \) invites \( b \), \( s \) and \( t \), how likely is it that any of the recipients is later uninvited (i.e., a self-invited user)? The possible outcomes are: \( \sim \) (no-one), \( b \), \( s \) or \( t \). The specification will then remove the likely self-invited user from the share request so that the inviter does not have to uninvite the user later. In a realistic setting, the number of share episodes will be large, and the uninvite data is noisy because some of the uninvite events may be due to mistakes by the inviter rather than attacks by a malicious user. The next section considers the use of statistical methods for implementing and evaluating Uninvite Reducer.

4 EXPERIMENTAL EVALUATION

This section describes the use of Bayesian classifiers and the logistic regression method to implement Uninvite Reducer, and evaluates the performance of both approaches using data generated by simulations of share episodes in ownCloud. In practice, such data is available from the access logs on the server.

### 4.1 Implementation Using Statistical Methods

In order to implement Uninvite Reducer, we can construct a conditional probability model for a share request \( sr \) to find the user among the invitees of the request most likely to be uninvited. This can be cast as a Bayesian classification problem for ordinal class labels or a logistic regression problem. Let \( x_1, \ldots, x_n \) be the vectorisation of the user set (that is, representing each member of the set by a binary variable in the vector), where \( |U| = n \).
Let \( X \) be the merger of two user vectors \( x_1, \ldots, x_n, x_{n+1}, \ldots, x_{2n} \), so that the first \( n \) variables represent the inviter (therefore, exactly one variable is true in the group), and the rest are for the invitees (therefore, in this group of variables, the inviter is always false, and one or more other variables are true). Let the class labels \( Y \) be an enumeration of \( U \), so that \( y_1, \ldots, y_n \) denote the user names. The invitee most likely to be uninvited in a share request, \( \hat{y} \), can be calculated as: \[ \hat{y} = \arg \max_{i \in \{1 \ldots n\}} \Pr(y_i) \prod_{j=1}^{2n} \Pr(x_j \mid y_i) \]

ALGORITHM 1: Generate Share Requests
**Input:** \( nobs \) and \( nuser \), representing the numbers of observations and users respectively.
**Output:** A share request matrix.

```plaintext
1. Let m and n be nobs and nuser respectively.
2. Let inviters and invitees be matrices of size (m, n).
3. Let sspace1 be a vector of length n containing one 1 and n-1 0s.
4. For i from 1 to m do:
   - Let inviters[i,] be a random permutation of sspace1.
   - Let k be the index of the 1 in inviters[i,].
   - Let invitees[i,] be a random permutation of a vector containing
     one 1 and n-2 random binary values, with a 0 inserted at position k.
5. Let sharerequests be a column bind of inviters and invitees.
```

An alternative to the naive Bayes approach is logistic regression [7], commonly used when the dependent variable has only two possible values. In our attack scenario, the dependent variable is the label for potential attackers, denoted as \( z_i \), which has two possible values: a user \( y_i \) is an attacker \((z_i = 1)\) or not \((z_i = 0)\). The independent variables are the inviter and invitee vectors \( x_1, \ldots, x_n, x_{n+1}, \ldots, x_{2n} \). Let \( p \) be the probability of the event \( z_i = 1 \), and thus \( 1 - p \) is the probability of \( z_i = 0 \).
The logistic regression model can be built as \[ \log \frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_{2n} x_{2n} \]

### 4.2 Data

Algorithm 1 is used to generate a matrix containing a given number of share requests \( nobs \) for a given number of users \( nuser \). There are two parts in the matrix \( sharerequests \): for the inviter part \( inviters \), each user is represented by a binary variable, but only one of them can take the value 1, and the rest must take the value 0 (only one inviter in every request). The algorithm achieves this by performing a random permutation of a vector with appropriate values (Lines 4–7). For the invitees' part \( invitees \), there are two constraints to satisfy: the inviter must not be an invitee, and the number of invitees per share request must be greater than zero. The algorithm achieves this by first recording the index of the inviter in each share request (Lines 10–12), and filling the rest of the columns with random binary values, before inserting 0 at the position of the inviter. The algorithm ensures that there is at least one invitee by adding one 1 in the sample space (Lines 13–16). The inviters and invitees matrices are then bound side by side.

### 4.3 Labelling

Having generated the share request data, we assign the labels indicating whether each observation is an attack or a non-attack. Since the question of any user being an attacker is a binary classification problem, class labels 0 and 1 are used for non-attack and attack respectively. In the training data, we assign the label 1 to every observation where a particular inviter is 1, and some invitees including the attacker are also 1. All other observations get the label 0. In the generation of 100 observations with six users, the number of uninvite cases (label 1) is around 6% (the percentage changes when the number of users changes).
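The generation steps described above can be sketched as follows. This is an illustrative Python re-implementation, not the authors' code (which is in R); the function and variable names are ours:

```python
import random

def generate_share_requests(nobs, nuser, seed=0):
    """Generate nobs share requests over nuser users, each row being
    2*nuser binary values: a one-hot inviter vector followed by an
    invitee vector.  The inviter is never an invitee, and every
    request has at least one invitee."""
    rng = random.Random(seed)
    rows = []
    for _ in range(nobs):
        # Inviter part: one 1 and n-1 0s, position chosen at random.
        inviter = [0] * nuser
        k = rng.randrange(nuser)
        inviter[k] = 1
        # Invitee part: random binary values, with 0 forced at the
        # inviter's position and at least one invitee guaranteed.
        invitees = [rng.randint(0, 1) for _ in range(nuser)]
        invitees[k] = 0
        if sum(invitees) == 0:
            j = rng.choice([p for p in range(nuser) if p != k])
            invitees[j] = 1
        rows.append(inviter + invitees)
    return rows

data = generate_share_requests(100, 6)
assert len(data) == 100 and all(len(r) == 12 for r in data)
for r in data:
    inviter, invitees = r[:6], r[6:]
    assert sum(inviter) == 1                 # exactly one inviter
    assert sum(invitees) >= 1                # at least one invitee
    assert invitees[inviter.index(1)] == 0   # inviter is not an invitee
```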
So the balance between the two labels is hugely in favor of non-attack, which is realistic because in practice the number of attacks is relatively low. This issue of class imbalance is well known. Therefore, it is not meaningful to look at the overall accuracy of classification. We will instead examine the accuracy for each class in order to avoid potential bias.

### 4.4 Results

Fig. 6 shows the performance of naive Bayes classifiers as the number of share requests increases (all our code, in R, is available from https://github.com/ttt23/SEAMS-2018). In each run, x number of attack cases are randomly selected, together with a proportional number of non-attack cases according to the class balance. The figure for each run is computed by sampling 10 times from the same dataset of 100 observations (in the style of 10-fold cross validation [10]). Therefore, for Fig. 6a for example, we have constructed and tested 60 classifiers. When constructing a classifier, the entire dataset is used for testing in every case. As Fig. 6a shows, the accuracy for classifying attack cases starts from around 30% and increases with the attack cases. As the classifiers observe more attack cases, the accuracy increases and achieves 100% when six attack observations are made. That is, in order to identify an attacker correctly, a classifier needs to see at least six uninvite requests if the number of observations is 100. In contrast, the accuracy for classifying non-attack cases is always 100%. This is because the classifiers observe only one attack case but several non-attack cases at the beginning, and are thus biased towards the non-attack cases. In practice, it means that the users of this system will not get false positives, but some attack cases will be missed initially when there are not enough attack cases to sample. As Fig. 6b and 6c show, the accuracy of classifying attack cases is close to 100% when the number of attack cases increases to five. Fig.
6d–6f show the changes in the accuracy as the number of users increases to nine, and as a result the class ratio changes to around three attack cases in 100 observations. The accuracy of classifying attack cases is comparable in the cases of six and nine users if the accuracy per number of attack cases is considered. Both Fig. 6e and 6f show that the accuracy for attack cases is close to 100% when the observations include six attack cases. With nine users, the accuracy for classifying non-attack cases is also perfect, because there is always a sufficient number of non-attack observations even when the number of attack cases is one. Fig. 7 shows the performance of naive Bayes classifiers as the total number of observations increases over time, again according to the class ratio. In each run, x observations are randomly selected from the same dataset of 300 observations, where the balance between attack and non-attack cases is kept at 6%. The accuracy of classifying non-attack cases hovers around 100%, while the accuracy of classifying attack cases improves slowly. Increasing the number of users makes the classifiers generally less accurate for the attack cases. We now construct logistic regression classifiers using the model described in Section 4.1 and the same dataset generated in the naive Bayes experiments. Fig. 8 shows the performance of the logistic regression approach as the numbers of users and overall observations vary. Unlike the naive Bayes classifiers, the accuracy of the logistic classifiers for classifying non-attack and attack cases starts from around 85% and 60% respectively (Fig. 8a–8c). The accuracy improves as the number of attack observations increases. In order to identify both attack and non-attack cases correctly, the classifiers need to observe at least four attack cases and around 64 non-attack cases. Increasing the number of users (Fig.
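A from-scratch sketch of the logistic model of Section 4.1, trained by batch gradient descent, is shown below (the paper's implementation is in R; the function names, learning rate, and toy data here are illustrative assumptions):

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Minimal logistic regression: fits log(p/(1-p)) = b0 + b1*x1 + ...

    Batch gradient descent on the log-loss; w[0] is the intercept b0.
    """
    k = len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        for j in range(k + 1):
            w[j] -= lr * grad[j] / len(X)
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy binary features: label 1 exactly when the second feature is set.
X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 5
y = [0, 1, 0, 1] * 5
w = train_logistic(X, y)
print([predict(w, xi) for xi in [[0, 0], [0, 1], [1, 0], [1, 1]]])
```

With binary inviter/invitee indicators as features, the fitted coefficients play the role of the \( \beta \) parameters in the model of Section 4.1.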
8d–8f) has little impact on the performance when the numbers of attack cases are comparable. Finally, Fig. 9 shows the performance of logistic regression classifiers when the total number of instances increases according to the class ratio. It shows that the accuracy for attack cases is zero until around 17 instances are observed when the number of users is six (Fig. 9a–9c), suggesting that the classifiers need to see at least two attack cases (similar to Fig. 7). The same is true when there are nine users. Accuracy for both cases improves as the number of combined instances increases. In general, both approaches perform well at detecting potential attack cases. When there is a short history of uninvites, naive Bayes is generally good at identifying non-attack cases but poor at identifying attack cases. On the other hand, the logistic regression method can identify attack cases with high accuracy even when there are few attack observations. The downside is that these classifiers also give a higher number of false positives, as some of the non-attack cases are incorrectly classified as attack cases, especially when the accuracy for the attack cases picks up.

## 5 RELATED WORK

Since this paper is primarily about the requirements and specification of systems with adaptive security behavior, the work discussed here relates to requirements engineering for adaptive security systems, statistical approaches to security, and the detection and prevention of parameter tampering attacks. Existing research on the modeling of security requirements, threats, and requirements evolution is not directly related to this work and is therefore not covered here.

### 5.1 Adaptation

There are several related notions around the term adaptation, including context-awareness and self-adaptation. Many of these notions and their significance to research have been discussed extensively elsewhere (for instance [20]).
Instead of repeating the discussion, we will focus on how the term adaptation is used. Perhaps the earliest example of adaptation is the PID controller, which uses feedback from the environment to compute the deviation from a desired target value and applies a correction when necessary. The notion of adaptation described in this paper is similar. However, our focus is on the requirements, and how they can be structured. Broy et al. [3] give a definition of adaptation which allows us to distinguish adaptive behavior from non-adaptive behavior. According to their definition, a system is non-adaptive if its behavior is determined exclusively by the user input. A system is adaptive if the output is determined both by the input values the user provides to the system and by other values (called indirect or implicit inputs) available to the system, of which the user may or may not be aware. Therefore, from the user's point of view, the machine may appear non-deterministic, although it is not actually non-deterministic, since it uses the implicit input values to determine its behavior. The implicit values are part of the context. Adaptation is non-transparent if the user cannot observe the context that affects the machine behavior; transparent if the user can observe part of the context but not change it; and diverted if the user can control part of the context. The main limitation of this definition is that adaptation is fundamentally about the user's perception of how their input is processed by the machine. Critically, in their definition the user plays no part in guiding the system away from unwanted behavior towards wanted behavior. Furthermore, their notion of adaptation does not distinguish between those parts of the environment that may cooperate during adaptation and those that may not, a distinction that is critical in security problems.
### 5.2 Requirements Engineering Approaches to Adaptation

RELAX [26] proposes a way of writing declarative requirements in order to allow for environmental uncertainty. To achieve this, requirements engineers first separate requirements that must be satisfied at all times (invariants) from requirements that do not have to be satisfied under certain environmental conditions. Requirements in the latter category are rephrased using special operators to indicate that they may not be fully satisfied, if at all, from time to time, due to some uncertainty about the environmental properties. Those operators include AS MANY AS POSSIBLE and AS EARLY AS POSSIBLE. What the RELAX language and approach provide is a way of weakening requirements at design time so that the specifications allow more behaviors that can satisfy the requirements under different environmental conditions. In our example, it means rewriting requirements such as Req by adding operators that weaken it. However, such weakened requirements are in general non-deontic (non-obligatory) in the sense that the system is no longer obliged to satisfy them at all. Conceptually, the improvement we bring over the RELAX approach is a way of relating run requirements to episodic requirements. As a result, episodic requirements remain binding, and adaptation is achieved not just by relaxing requirements but by allowing the specification of run requirements to modify the specification of the episodic requirements. The user is expected to help 'correct' the system behavior over time through error reporting. User participation is therefore fundamental to our notion of adaptation. Unlike our approach, RELAX provides no implementation architecture for adaptation, and as a result it is not clear how the user might (or might not) be involved in the adaptation process, and how the system may change its behavior if the user can help identify unwanted behavior.
Finally, RELAX allows uncertainty in both the input values and the output values (system actions). We are currently restricted to uncertainty in input values but not in output values. In the ADAM approach for handling uncertainty in non-functional requirements, such as response time and usability, there are two main levels of abstraction [6]. In the modeling phase, they first describe the "abstract functionalities" of a system, i.e., the system implementing the functional requirements, using a workflow, such as a UML activity diagram. Each abstract functionality may have one or more concrete implementations, and each implementation is annotated with its impact on non-functional requirements. For example, the "product lookup" abstract functionality may be implemented by an API call to searchpc.com, or through manual input of the user. For each implementation, there will be different values for response time, usability and energy consumption. The problem is to find a composition of the concrete functionalities at runtime that gives the best satisfaction of all non-functional requirements. There are some similarities with our approach, such as the probabilistic characterization of the environment and the incompleteness of knowledge about the environment at design time. One key, perhaps complementary, difference is that we are concerned with finding the most likely correct input value, before choosing the output value that is already associated by the requirement with the input value. In other words, once the input is known, the output is certain. In their work, that is not the case: they are concerned with choosing output values that will satisfy non-functional requirements to the best extent possible. However, unlike in the RELAIS approach, their specifications of non-functional requirements do not modify the behavior of concrete functionalities: they simply choose the best configuration of the pre-defined behaviors.
The notion of awareness requirements [23, 24] has also been used to describe the requirements for adaptation and evolution in software systems. Awareness requirements are about the success and failure of other requirements. An awareness requirement can, for example, state that a particular requirement must never fail. Similarly, it can state the ratio of success to failure, the trend over time, and so on. Patterns of awareness requirements have been presented in [24]. There are some similarities between their work and the RELAIS approach. Their notion of an awareness requirement is similar to the notion of run requirements, but our treatment of run requirements as a property of behavioral segments is more precise. Both episodic and run requirements in the RELAIS approach are properties of system behavior, rather than properties of other requirements (i.e., properties of properties). More importantly, run requirements are not just about expressing the desired properties, but also about modifying the behavior of episodic specifications in order to adapt. There are several architectural approaches to dealing with adaptation [11, 16, 28]. An important theme in this line of work is the description and analysis of how components and their connections may change at runtime in response to environmental conditions. The general architecture for adaptation used by RELAIS is similar to the wrapper architectural style. The RELAIS approach also emphasizes the role of requirements and how they can be structured in order to highlight the need for adaptation at runtime.

### 5.3 Statistical approaches for security

Anomaly detection techniques [4] have been applied extensively for improving security, such as intrusion detection in computer networks and fraud detection in banking systems. Classification techniques are frequently used when partial labels are available. In this work, we have shown that classification techniques can be used for adaptation as well as for security.
### 5.4 Detecting and Preventing Parameter Tampering Attacks

Much of the existing work on parameter tampering attacks uses syntax-based approaches to detect and prevent potential attacks. These approaches include static analysis of design for potential weaknesses [12] and dynamic analysis to prevent attacks [22], and they tend to assume that the attackers are not insiders, and therefore have no legitimate access to the systems. In our attack scenarios, we assume that attackers are insiders, and are therefore more difficult to detect. As a result, the detection of potential attacks cannot be sound: it depends on the availability of good training data. Having said that, we have shown that generic classification algorithms can learn to classify the attacks accurately very quickly.

## 6 CONCLUSION

This paper has proposed a notion of runtime adaptation for security-critical systems, where knowledge about the environment is partial at design time. The challenge is to use knowledge discovered at runtime to identify appropriate adaptation behavior in order to improve the system security. We have described the input approximation method for adaptation, and how the method can be implemented using constraint-based programming methods and statistical machine learning techniques. Unlike existing approaches that aim to prevent security breaches at runtime, the RELAIS approach aims to exploit knowledge about the environment discovered at runtime (during failures), and use the knowledge to identify appropriate adaptation behavior in the future. The proposed approach is particularly useful, both conceptually and practically, when dealing with parameter tampering attacks, where it is not possible to design all the defense mechanisms at design time. Having said that, we suggest that adaptation via input approximation is a general concept that can be applied to any system whose behavior is determined by its input values, and we plan to explore this in future work.
## ACKNOWLEDGMENTS

We thank the anonymous reviewers for insightful comments and suggestions, and Michael Jackson for guidance and encouragement. This work is supported by SFI grant 13/RC/2094, QNRF NPRP 5-079-1-018, and ERC Advanced Grant no. 291652 (ASAP).
Machine-Level Programming V: Advanced Topics
15-213/18-213/14-513/15-513: Introduction to Computer Systems
9th Lecture, September 25, 2018

Today
- Memory Layout
- Buffer Overflow
  - Vulnerability
  - Protection
- Unions

x86-64 Linux Memory Layout
- **Stack**
  - Runtime stack (8MB limit)
  - E.g., local variables
- **Heap**
  - Dynamically allocated as needed
  - When call `malloc()`, `calloc()`, `new`
- **Data**
  - Statically allocated data
  - E.g., global vars, `static` vars, string constants
- **Text / Shared Libraries**
  - Executable machine instructions
  - Read-only

(Figure: memory layout from the top of the user address space near 0x00007FFFFFFFFFFF downwards: Stack (8MB), Shared Libraries, Heap, Data, Text.)

Memory Allocation Example

```c
char big_array[1L<<24];  /* 16 MB */
char huge_array[1L<<31]; /*  2 GB */

int global = 0;

int useless() { return 0; }

int main ()
{
    void *p1, *p2, *p3, *p4;
    int local = 0;
    p1 = malloc(1L << 28); /* 256 MB */
    p2 = malloc(1L << 8);  /* 256 B  */
    p3 = malloc(1L << 32); /*   4 GB */
    p4 = malloc(1L << 8);  /* 256 B  */
    /* Some print statements ... */
}
```

Where does everything go?

x86-64 Example Addresses (address range ~2^47, not drawn to scale)
- local: 0x00007ffe4d3be87c
- p1: 0x00007f7262a1e010
- p3: 0x00007f7162a1d010
- p4: 0x0000000008359d120
- p2: 0x0000000008359d010
- big_array: 0x00000000080601060
- huge_array: 0x0000000000601060
- main(): 0x0000000000040060c
- useless(): 0x00000000000400590

Runaway Stack Example

```c
int recurse(int x) {
    int a[1<<15]; // 4*2^15 = 128 KiB
    printf("x = %d. a at %p\n", x, a);
    a[0] = (1<<14)-1;
    a[a[0]] = x-1;
    if (a[a[0]] == 0)
        return -1;
    return recurse(a[a[0]]) - 1;
}
```

- Functions store local data in their stack frames
- Recursive functions cause deep nesting of frames

```
./runaway 67
x = 67. a at 0x7ffd18aba930
x = 66. a at 0x7ffd18a9a920
x = 65. a at 0x7ffd18a7a910
x = 64. a at 0x7ffd18a5a900
. . .
x = 4. a at 0x7ffd182da540
x = 3. a at 0x7ffd182ba530
x = 2.
a at 0x7ffd1829a520
Segmentation fault (core dumped)
```

Today
- Memory Layout
- Buffer Overflow
  - Vulnerability
  - Protection
- Unions

Recall: Memory Referencing Bug Example

```c
typedef struct {
    int a[2];
    double d;
} struct_t;

double fun(int i) {
    volatile struct_t s;
    s.d = 3.14;
    s.a[i] = 1073741824; /* Possibly out of bounds */
    return s.d;
}
```

```
fun(0) -> 3.1400000000
fun(1) -> 3.1400000000
fun(2) -> 3.1399998665
fun(3) -> 2.0000006104
fun(6) -> Stack smashing detected
fun(8) -> Segmentation fault
```
- Result is system specific

Memory Referencing Bug Example

```
fun(0) -> 3.1400000000
fun(1) -> 3.1400000000
fun(2) -> 3.1399998665
fun(3) -> 2.0000006104
fun(4) -> Segmentation fault
fun(8) -> 3.1400000000
```

Explanation:
<table>
<thead>
<tr><th>i</th><th>Location accessed by fun(i)</th></tr>
</thead>
<tbody>
<tr><td>0</td><td>a[0]</td></tr>
<tr><td>1</td><td>a[1]</td></tr>
<tr><td>2</td><td>d3 ... d0</td></tr>
<tr><td>3</td><td>d7 ... d4</td></tr>
<tr><td>4</td><td>Critical State</td></tr>
<tr><td>5</td><td>Critical State</td></tr>
<tr><td>6</td><td>Critical State</td></tr>
<tr><td>7</td><td>Critical State</td></tr>
<tr><td>8</td><td>???</td></tr>
</tbody>
</table>

Such problems are a BIG deal
- Generally called a “buffer overflow”
  - when exceeding the memory size allocated for an array
- Why a big deal?
- It’s the #1 technical cause of security vulnerabilities
  - #1 overall cause is social engineering / user ignorance
- Most common form
  - Unchecked lengths on string inputs
  - Particularly for bounded character arrays on the stack
    - sometimes referred to as stack smashing

String Library Code
- Implementation of Unix function `gets()`

```c
/* Get string from stdin */
char *gets(char *dest)
{
    int c = getchar();
    char *p = dest;
    while (c != EOF && c != '\n') {
        *p++ = c;
        c = getchar();
    }
    *p = '\0';
    return dest;
}
```

- No way to specify limit on number of characters to read
- Similar problems with other library functions
  - `strcpy`, `strcat`: copy strings of arbitrary length
  - `scanf`, `fscanf`, `sscanf`, when given `%s` conversion specification

Vulnerable Buffer Code

```c
/* Echo Line */
void echo() {
    char buf[4]; /* Way too small! */
    gets(buf);
    puts(buf);
}

void call_echo() {
    echo();
}
```

(btw, how big is big enough?)

```bash
unix> ./bufdemo-nsp
Type a string: 01234567890123456789012
01234567890123456789012

unix> ./bufdemo-nsp
Type a string: 012345678901234567890123
012345678901234567890123
Segmentation Fault
```

Buffer Overflow Disassembly

echo:
<table> <thead> <tr> <th>Address</th> <th>Bytes</th> <th>Instruction</th> </tr> </thead> <tbody> <tr> <td>4006cf:</td> <td>48 83 ec 18</td> <td>sub $0x18,%rsp</td> </tr> <tr> <td>4006d3:</td> <td>48 89 e7</td> <td>mov %rsp,%rdi</td> </tr> <tr> <td>4006d6:</td> <td>e8 a5 ff ff ff</td> <td>callq 400680 &lt;gets&gt;</td> </tr> <tr> <td>4006db:</td> <td>48 89 e7</td> <td>mov %rsp,%rdi</td> </tr> <tr> <td>4006de:</td> <td>e8 3d fe ff ff</td> <td>callq 400520 &lt;puts@plt&gt;</td> </tr> <tr> <td>4006e3:</td> <td>48 83 c4 18</td> <td>add $0x18,%rsp</td> </tr> <tr> <td>4006e7:</td> <td>c3</td> <td>retq</td> </tr> </tbody> </table>

call_echo:
<table> <thead> <tr> <th>Address</th> <th>Bytes</th> <th>Instruction</th> </tr> </thead> <tbody> <tr> <td>4006e8:</td> <td>48 83 ec 08</td> <td>sub $0x8,%rsp</td> </tr> <tr>
<td>4006ec:</td> <td>b8 00 00 00 00</td> <td>mov $0x0,%eax</td> </tr> <tr> <td>4006f1:</td> <td>e8 d9 ff ff ff</td> <td>callq 4006cf &lt;echo&gt;</td> </tr> <tr> <td>4006f6:</td> <td>48 83 c4 08</td> <td>add $0x8,%rsp</td> </tr> <tr> <td>4006fa:</td> <td>c3</td> <td>retq</td> </tr> </tbody> </table>

Buffer Overflow Stack
Before call to gets: the stack frame for call_echo holds the return address (8 bytes), then 20 bytes unused, then buf at %rsp.

```c
/* Echo Line */
void echo() {
    char buf[4]; /* Way too small! */
    gets(buf);
    puts(buf);
}
```

echo:
  subq $24, %rsp
  movq %rsp, %rdi
  call gets
  ...

Buffer Overflow Stack Example
Before call to gets: 20 bytes unused above buf; buffer contents [3][2][1][0], buf ← %rsp.

call_echo:
  . . .
  4006f1: callq 4006cf <echo>
  4006f6: add $0x8,%rsp
  . . .

echo:
  subq $0x18, %rsp
  movq %rsp, %rdi
  call gets
  . . .

Buffer Overflow Stack Example #1
After call to gets, the stack frame for call_echo contains (from the return address downwards):

<table> <thead> <tr> <th>00</th> <th>00</th> <th>00</th> <th>00</th> </tr> </thead> <tbody> <tr> <td>00</td> <td>40</td> <td>06</td> <td>f6</td> </tr> <tr> <td>00</td> <td>32</td> <td>31</td> <td>30</td> </tr> <tr> <td>39</td> <td>38</td> <td>37</td> <td>36</td> </tr> <tr> <td>35</td> <td>34</td> <td>33</td> <td>32</td> </tr> <tr> <td>31</td> <td>30</td> <td>39</td> <td>38</td> </tr> <tr> <td>37</td> <td>36</td> <td>35</td> <td>34</td> </tr> <tr> <td>33</td> <td>32</td> <td>31</td> <td>30</td> </tr> </tbody> </table>

```c
void echo() {
    char buf[4];
    gets(buf);
    ...
}
```

echo:
  subq $0x18, %rsp
  movq %rsp, %rdi
  call gets
  ...

call_echo:
  ...
  4006f1: callq 4006cf <echo>
  4006f6: add $0x8,%rsp
  ...

buf ← %rsp

```
unix> ./bufdemo-nsp
Type a string: 01234567890123456789012
01234567890123456789012
```

"01234567890123456789012\0" overflowed the buffer, but did not corrupt state.

Buffer Overflow Stack Example #2
After call to gets

```c
void echo() {
    char buf[4];
    gets(buf);
    . . .
}
```

echo:
  subq $24, %rsp
  movq %rsp, %rdi
  call gets
  . . .

call_echo:
  . . .
  4006f1: callq 4006cf <echo>
  4006f6: add $0x8,%rsp
  . . .
buf ← %rsp

```
unix> ./bufdemo-nsp
Type a string: 012345678901234567890123
012345678901234567890123
Segmentation fault
```

Program “returned” to 0x0400600, and then crashed.

Stack Smashing Attacks
- Overwrite normal return address A with address of some other code S
- When Q executes `ret`, will jump to other code

```c
int Q() {
    char buf[64];
    gets(buf);
    ...
    return ...;
}
```

```c
void P(){
    Q();
    ...
}
```

```c
void S(){
    /* Something unexpected */
    ...
}
```

Crafting Smashing String
(Stack diagram: 24 bytes of filler fill buf and the unused space in the stack frame for call echo, and the next 8 bytes overwrite the return address with the address of smash, 0x00000000004006fb.)

%rsp  24 bytes

Target Code

```c
int echo() {
    char buf[4];
    gets(buf);
    ...
    return ...;
}
```

```c
void smash() {
    printf("I've been smashed!\n");
    exit(0);
}
```

```
00000000004006fb <smash>:
  4006fb: 48 83 ec 08
```

Attack String (Hex)

```
30 31 32 33 34 35 36 37 38 39 30 31
32 33 34 35 36 37 38 39 30 31 32 33
fb 06 40 00 00 00 00 00
```

(24 filler bytes, then the 8-byte little-endian address of smash.)

Smashing String Effect
(Stack diagram: after gets returns, the filler bytes occupy the 24 bytes from buf up to the return-address slot, and the return address now reads fb 06 40 00 00 00 00 00, i.e. 0x4006fb, so `ret` jumps to smash.)

Code Injection Attacks
- Input string contains byte representation of executable code
- Overwrite return address A with address of buffer B
- When Q executes `ret`, will jump to exploit code

How Does The Attack Code Execute?

```c
void P() {
    Q();
    ...
}

int Q() {
    char buf[64];
    gets(buf); // A->B
    ...
    return ...;
}
```

What To Do About Buffer Overflow Attacks
- Avoid overflow vulnerabilities
- Employ system-level protections
- Have compiler use “stack canaries”
- Let's talk about each...

1. Avoid Overflow Vulnerabilities in Code (!)
For example, use library routines that limit string lengths
- fgets instead of gets
- strncpy instead of strcpy
- Don’t use scanf with %s conversion specification
  - Use fgets to read the string
  - Or use %ns where n is a suitable integer

2.
System-Level Protections can help
- Randomized stack offsets
  - At start of program, allocate random amount of space on stack
  - Shifts stack addresses for entire program
  - Makes it difficult for hacker to predict beginning of inserted code
- E.g.: 5 executions of memory allocation code
  - Stack repositioned each time program executes

```
local
0x7ffe4d3be87c
0x7fff75a4f9fc
0x7ffeadb7c80c
0x7ffeaea2fdac
0x7ffcd452017c
```

(Diagram: a random allocation below the stack base shifts main and the application code's frames, so the attacker cannot reliably guess the buffer address B for the pad + exploit code.)

2. System-Level Protections can help
- **Nonexecutable code segments**
  - In traditional x86, can mark region of memory as either “read-only” or “writeable”
    - Can execute anything readable
  - x86-64 added explicit “execute” permission
  - Stack marked as non-executable
    - Any attempt to execute this code will fail

3. Stack Canaries can help
- **Idea**
  - Place special value (“canary”) on stack just beyond buffer
  - Check for corruption before exiting function
- **GCC Implementation**
  - `-fstack-protector`
  - Now the default (disabled earlier)

```
unix>./bufdemo-sp
Type a string: 0123456
0123456

unix>./bufdemo-sp
Type a string: 01234567
*** stack smashing detected ***
```

Protected Buffer Disassembly

echo:
<table> <thead> <tr> <th>Address</th> <th>Instruction</th> <th>Label</th> </tr> </thead> <tbody> <tr> <td>40072f</td> <td>sub $0x18,%rsp</td> <td></td> </tr> <tr> <td>400733</td> <td>mov %fs:0x28,%rax</td> <td></td> </tr> <tr> <td>40073c</td> <td>mov %rax,0x8(%rsp)</td> <td></td> </tr> <tr> <td>400741</td> <td>xor %eax,%eax</td> <td></td> </tr> <tr> <td>400743</td> <td>mov %rsp,%rdi</td> <td></td> </tr> <tr> <td>400746</td> <td>callq 4006e0 &lt;gets&gt;</td> <td></td> </tr> <tr> <td>40074b</td> <td>mov %rsp,%rdi</td> <td></td> </tr> <tr> <td>40074e</td> <td>callq 400570 &lt;puts@plt&gt;</td> <td></td> </tr> <tr> <td>400753</td> <td>mov 0x8(%rsp),%rax</td> <td></td> </tr> <tr> <td>400758</td> <td>xor %fs:0x28,%rax</td> <td></td> </tr> <tr>
<td>400761</td> <td>je 400768 &lt;echo+0x39&gt;</td> <td></td> </tr> <tr> <td>400763</td> <td>callq 400580 &lt;__stack_chk_fail@plt&gt;</td> <td></td> </tr> <tr> <td>400768</td> <td>add $0x18,%rsp</td> <td></td> </tr> <tr> <td>40076c</td> <td>retq</td> <td></td> </tr> </tbody> </table>

Setting Up Canary
Before call to gets

```c
/* Echo Line */
void echo() {
    char buf[4]; /* Way too small! */
    gets(buf);
    puts(buf);
}
```

Stack: Return Address (8 bytes), Canary (8 bytes), buf ← %rsp

echo:
  . . .
  movq %fs:40, %rax   # Get canary
  movq %rax, 8(%rsp)  # Place on stack
  xorl %eax, %eax     # Erase canary
  . . .

Checking Canary
After call to gets

```c
/* Echo Line */
void echo() {
    char buf[4]; /* Way too small! */
    gets(buf);
    puts(buf);
}
```

Input: 0123456

echo:
  movq 8(%rsp), %rax    # Retrieve from stack
  xorq %fs:40, %rax     # Compare to canary
  je .L6
  call __stack_chk_fail # FAIL

Return-Oriented Programming Attacks
- **Challenge (for hackers)**
  - Stack randomization makes it hard to predict buffer location
  - Marking stack nonexecutable makes it hard to insert binary code
- **Alternative Strategy**
  - Use existing code
    - E.g., library code from stdlib
  - String together fragments to achieve overall desired outcome
  - *Does not overcome stack canaries*
- **Construct program from gadgets**
  - Sequence of instructions ending in `ret`
    - Encoded by single byte `0xc3`
  - Code positions fixed from run to run
  - Code is executable

Gadget Example #1

```c
long ab_plus_c(long a, long b, long c) {
    return a*b + c;
}
```

Gadget address = 0x4004d4
- Use tail end of existing functions

Gadget Example #2

```c
void setval(unsigned *p) {
    *p = 3347663060u;
}
```

Encodes `movq %rax, %rdi`

```
<setval>:
  4004d9: c7 07 d4 48 89 c7
  4004df: c3
```

Gadget address = 0x4004dc
- Repurpose byte codes

ROP Execution
- Trigger with `ret` instruction
  - Will start executing Gadget 1
- Final `ret` in each gadget will start next one

Crafting an ROP Attack String
(Stack diagram: the stack frame for call echo is filled with 24 filler bytes, followed by the 8-byte gadget address.)

Gadget:

```
00000000004004d0 <ab_plus_c>:
  4004d0: 48 0f af fe   imul %rsi,%rdi
  4004d4: 48 8d 04 17   lea (%rdi,%rdx,1),%rax
  4004d8: c3            retq
```

**Attack String (Hex)**

```
30 31 32 33 34 35 36 37 38 39 30 31
32 33 34 35 36 37 38 39 30 31 32 33
d4 04 40 00 00 00 00 00
```

Multiple gadgets will corrupt the stack upwards.

Quiz Time!
Check out: https://canvas.cmu.edu/courses/5835

Today

- Memory Layout
- Buffer Overflow
  - Vulnerability
  - Protection
- Unions

Union Allocation

- Allocate according to largest element
- Can only use one field at a time

```c
union U1 {
    char c;
    int i[2];
    double v;
} *up;
```

```c
struct S1 {
    char c;
    int i[2];
    double v;
} *sp;
```

Layout (x86-64): in the union, `c`, `i[0]`, and `v` all start at `up+0` and the union occupies 8 bytes (the size of its largest member). In the struct, `c` is at `sp+0` followed by 3 bytes of padding, `i[0]` is at `sp+4`, `i[1]` at `sp+8`, then 4 more bytes of padding so that `v` is 8-byte aligned at `sp+16`, for 24 bytes total.

Using Union to Access Bit Patterns

```c
typedef union {
    float f;
    unsigned u;
} bit_float_t;

float bit2float(unsigned u) {
    bit_float_t arg;
    arg.u = u;
    return arg.f;
}

unsigned float2bit(float f) {
    bit_float_t arg;
    arg.f = f;
    return arg.u;
}
```

Same as (float) u? Same as (unsigned) f?

Byte Ordering Revisited

■ Idea
▪ Short/long/quad words stored in memory as 2/4/8 consecutive bytes
▪ Which byte is most (least) significant?
▪ Can cause problems when exchanging binary data between machines

■ Big Endian
▪ Most significant byte has lowest address
▪ Sparc, Internet

■ Little Endian
▪ Least significant byte has lowest address
▪ Intel x86, ARM (Android and iOS)

■ Bi Endian
▪ Can be configured either way
▪ ARM

Byte Ordering Example

```c
union {
    unsigned char c[8];
    unsigned short s[4];
    unsigned int i[2];
    unsigned long l[1];
} dw;
```

How are the bytes inside short/int/long stored?
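Before walking through the diagrams, the question can also be answered in code. This is a hedged sketch of our own (the helper names `is_little_endian`, `float2bit`, and `bit2float` are not from the slides): a union overlays the same bytes with different types, so inspecting the byte at the lowest address reveals the host's byte order.

```c
/* Sketch (our own names, not from the lecture): detect byte order by
 * overlaying an unsigned with a byte array, as in the dw example. */
int is_little_endian(void) {
    union { unsigned i; unsigned char c[sizeof(unsigned)]; } u = { .i = 1u };
    return u.c[0] == 1;  /* LSB at the lowest address => little endian */
}

/* The bit_float_t trick, repeated here so this file stands alone:
 * reinterpret float bits without any numeric conversion. */
typedef union { float f; unsigned u; } bit_float_t;

unsigned float2bit(float f) { bit_float_t a; a.f = f; return a.u; }
float bit2float(unsigned u) { bit_float_t a; a.u = u; return a.f; }
```

On x86-64 or ARM (Android/iOS) `is_little_endian()` returns 1; on a big-endian SPARC it returns 0. Note that `float2bit(1.0f)` yields the IEEE-754 bit pattern 0x3f800000, whereas the cast `(unsigned)1.0f` performs a numeric conversion and yields 1.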
Memory addresses growing

**32-bit**: `i[0]` occupies bytes 0–3, `i[1]` bytes 4–7, and `l[0]` only bytes 0–3 (long is 4 bytes).

**64-bit**: `i[0]` occupies bytes 0–3, `i[1]` bytes 4–7, and `l[0]` spans all eight bytes (long is 8 bytes).

Byte Ordering Example (Cont.)

```c
int j;
for (j = 0; j < 8; j++)
    dw.c[j] = 0xf0 + j;

printf("Characters 0-7 == [0x%x,0x%x,0x%x,0x%x,0x%x,0x%x,0x%x,0x%x]\n",
       dw.c[0], dw.c[1], dw.c[2], dw.c[3],
       dw.c[4], dw.c[5], dw.c[6], dw.c[7]);
printf("Shorts 0-3 == [0x%x,0x%x,0x%x,0x%x]\n",
       dw.s[0], dw.s[1], dw.s[2], dw.s[3]);
printf("Ints 0-1 == [0x%x,0x%x]\n", dw.i[0], dw.i[1]);
printf("Long 0 == [0x%lx]\n", dw.l[0]);
```

Byte Ordering on IA32

Little Endian. Memory bytes, lowest address first: f0 f1 f2 f3 f4 f5 f6 f7; `i[0]` covers f0–f3, `i[1]` covers f4–f7, `l[0]` covers f0–f3 (LSB at the lowest address).

Print Output:

```
Characters 0-7 == [0xf0,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7]
Shorts 0-3 == [0xf1f0,0xf3f2,0xf5f4,0xf7f6]
Ints 0-1 == [0xf3f2f1f0,0xf7f6f5f4]
Long 0 == [0xf3f2f1f0]
```

Byte Ordering on Sun

Big Endian. Memory bytes, lowest address first: f0 f1 f2 f3 f4 f5 f6 f7; `i[0]` covers f0–f3, `i[1]` covers f4–f7, `l[0]` covers f0–f3 (MSB at the lowest address).

Output on Sun:

```
Characters 0-7 == [0xf0,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7]
Shorts 0-3 == [0xf0f1,0xf2f3,0xf4f5,0xf6f7]
Ints 0-1 == [0xf0f1f2f3,0xf4f5f6f7]
Long 0 == [0xf0f1f2f3]
```

Byte Ordering on x86-64

Little Endian. Memory bytes, lowest address first: f0 f1 f2 f3 f4 f5 f6 f7; `i[0]` covers f0–f3, `i[1]` covers f4–f7, `l[0]` covers all of f0–f7.

Output on x86-64:

```
Characters 0-7 == [0xf0,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7]
Shorts 0-3 == [0xf1f0,0xf3f2,0xf5f4,0xf7f6]
Ints 0-1 == [0xf3f2f1f0,0xf7f6f5f4]
Long 0 == [0xf7f6f5f4f3f2f1f0]
```

Summary of Compound Types in C

- **Arrays**
  - Contiguous allocation of memory
  - Aligned to satisfy every element's alignment requirement
  - Pointer to first element
  - No bounds checking
- **Structures**
  - Allocate bytes in order declared
  - Pad in middle and at end to satisfy alignment
- **Unions**
  - Overlay declarations
  - Way to circumvent type system

Summary

- Memory Layout
- Buffer Overflow
  - Vulnerability
  - Protection
  - Code Injection Attack
  - Return Oriented Programming
- Unions

Exploits Based on Buffer Overflows

- **Buffer overflow bugs can allow remote machines to execute arbitrary code on victim machines**
  - Distressingly common in real programs
  - Programmers keep making the same mistakes 😞
  - Recent measures make these attacks much more difficult
- **Examples across the decades**
  - Original "Internet worm" (1988)
  - "IM wars" (1999)
  - Twilight hack on Wii (2000s)
  - ... and many, many more
- **You will learn some of the tricks in attacklab**
  - Hopefully to convince you to never leave such holes in your programs!!
Example: the original Internet worm (1988)

- Exploited a few vulnerabilities to spread
  - Early versions of the finger server (fingerd) used `gets()` to read the argument sent by the client:
    - `finger droh@cs.cmu.edu`
  - Worm attacked fingerd server by sending phony argument:
    - `finger "exploit-code padding new-return-address"`
    - exploit code: executed a root shell on the victim machine with a direct TCP connection to the attacker.
- Once on a machine, scanned for other machines to attack
  - invaded ~6000 computers in hours (10% of the Internet 😊)
  - see June 1989 article in *Comm. of the ACM*
  - the young author of the worm was prosecuted...
  - and CERT was formed... still homed at CMU

Example 2: IM War

- **July, 1999**
  - Microsoft launches MSN Messenger (instant messaging system).
  - Messenger clients can access popular AOL Instant Messaging Service (AIM) servers

IM War (cont.)

- **August 1999**
  - Mysteriously, Messenger clients can no longer access AIM servers
  - Microsoft and AOL begin the IM war:
    - AOL changes server to disallow Messenger clients
    - Microsoft makes changes to clients to defeat AOL changes
    - At least 13 such skirmishes
  - What was really happening?
    - AOL had discovered a buffer overflow bug in their own AIM clients
    - They exploited it to detect and block Microsoft: the exploit code returned a 4-byte signature (the bytes at some location in the AIM client) to server
    - When Microsoft changed code to match signature, AOL changed signature location

Date: Wed, 11 Aug 1999 11:30:57 -0700 (PDT)
From: Phil Bucking <philbucking@yahoo.com>
Subject: AOL exploiting buffer overrun bug in their own software!
To: rms@pharlap.com

Mr. Smith,

I am writing you because I have discovered something that I think you might find interesting because you are an Internet security expert with experience in this area. I have also tried to contact AOL but received no response.
I am a developer who has been working on a revolutionary new instant messaging client that should be released later this year. ... It appears that the AIM client has a buffer overrun bug. By itself this might not be the end of the world, as MS surely has had its share. But AOL is now *exploiting their own buffer overrun bug* to help in its efforts to block MS Instant Messenger. ... Since you have significant credibility with the press I hope that you can use this information to help inform people that behind AOL's friendly exterior they are nefariously compromising peoples' security. Sincerely, Phil Bucking Founder, Bucking Consulting philbucking@yahoo.com It was later determined that this email originated from within Microsoft! Aside: Worms and Viruses ■ **Worm: A program that** ▪ Can run by itself ▪ Can propagate a fully working version of itself to other computers ■ **Virus: Code that** ▪ Adds itself to other programs ▪ Does not run independently ■ **Both are (usually) designed to spread among computers and to wreak havoc**
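All of the overflow exploits above start from `echo()`'s unbounded `gets()`. As a closing sketch (our own code, not from the lecture), the standard repair is to bound the read with `fgets`, which writes at most n-1 characters plus a terminating `'\0'` regardless of how long the input line is:

```c
#include <stdio.h>
#include <string.h>

/* Bounded replacement (our own sketch) for the gets() call in echo():
 * fgets never writes past n bytes, always NUL-terminates on success,
 * and we strip the trailing newline it may leave in the buffer. */
char *safe_read_line(char *buf, size_t n, FILE *in) {
    if (fgets(buf, (int)n, in) == NULL)
        return NULL;                 /* EOF or read error */
    buf[strcspn(buf, "\n")] = '\0';  /* drop trailing newline, if any */
    return buf;
}
```

With an 8-byte buffer, a 10-character input line is simply split across two calls instead of overrunning the stack; the canary, randomization, and nonexecutable-stack defenses above then never need to fire.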
CSRL: A Language for Classificatory Problem Solving and Uncertainty Handling

The ability to map the state of an object into a category in a classification hierarchy has long been an important part of many fields, for example, biology and medicine. Recently, AI research has focused increasing attention on classification (Gomez & Chandrasekaran, 1981; Weiss & Kulikowski, 1984; Clancey, 1985; Cohen et al. 1985; and Gordon and Shortliffe, 1985), and has been especially concerned with applying classification to diagnostic problems. One of the problems in classification is that the relationship between observable evidence and categories is often ambiguous. A piece of evidence can be associated with several categories or can occur with a category in an irregular fashion. As a consequence, uncertainty handling is an important facet of classificatory problem solving. In this article, we present and explain a programming language called CSRL, which is intended to facilitate the construction of knowledge-based systems that combine classificatory reasoning with uncertainty handling.

The Need for Special Languages

Languages developed within computer science often embody theories and assumptions about how programs and data should be organized. AI theories, however, are not concerned with programs and data per se, but with the organization of knowledge and its use. The problem for AI languages is transforming AI theories into symbolic structures. This pattern can be seen in knowledge representation (for example, semantic nets and KL-ONE [Brachman and Schmolze, 1985]) and in knowledge-based programming (for example, knowledge sources and OPM [Hayes-Roth, 1985]).

Motivations. Why develop "yet another language"? Our desire to transform a particular theory into a language is motivated by the following needs. Although nearly all programming languages are adequate at the symbol level (being equivalent to Turing machines), the interpretation of their symbols is relatively unconstrained.
Although the constructs of these languages can be used to enforce internal consistency of symbols and symbol structures, there is little restriction on the external meanings of symbols. The differences between the language and the real world must be covered by software engineering and programming ability.

Many AI languages do not provide constructs that make the organization of the problem solving explicit. For example, R1 (McDermott, 1982), which is implemented in OPS5 (Forgy, 1981), performs a sequence of design subtasks, each of which is implemented as a set of production rules. However, OPS5 has no organizing construct larger than a single rule, so the grouping of rules and the sequencing from one set of rules to another are achieved by programming techniques.

**Abstract.** In this article, we present a programming language for expressing classificatory problem solvers. CSRL (Conceptual Structures Representation Language) provides structures for representing classification trees, for navigating within those trees, and for encoding uncertainty judgments about the presence of hypotheses. We discuss the motivations, theory, and assumptions that underlie CSRL. Also, some expert systems constructed with CSRL are briefly described.

The point is not that it was difficult or unnatural to use OPS5 for R1, but that R1's method of design problem solving and OPS5's production-rule mechanism are at different levels of organization and problem solving. If the explanation of R1's problem solving were to be automated, knowledge about OPS5 would not be enough. Clancey (1981) noted a similar problem with MYCIN and its rules when he developed a program for explaining MYCIN's knowledge. Both the generality and the mismatch of organization make it difficult to judge the sorts of problems that these languages can naturally solve. The gap between the languages and the problems is simply too large.
**CSRL for Classification**

The AI community needs to develop languages that are specific to particular ways of organizing knowledge and problem solving. Such languages will be powerful when the problems of a domain match the capabilities of the languages. CSRL is intended to be such a language for developing classificatory problem solvers. Although CSRL doesn't eliminate the need for significant amounts of knowledge engineering, it does provide constructs that encode classificatory knowledge in a variety of ways—from the hypotheses in the classification tree to rules that match on the situation being classified.

Because it is not a general programming language, CSRL should not be viewed as a total solution for developing expert systems but as one of the building blocks for constructing any single expert system. For illustration, we discuss CSRL's role in some expert systems, as well as its general role in diagnostic problem solving, looking at its relationship to other strategies that are diagnostically useful.

**Generic Tasks**

CSRL's theoretical base comes from Chandrasekaran (1983, 1985), who proposes that expert problem solving relies on a number of elementary organizational and information-processing strategies, which are called generic tasks. Each generic task has a particular kind of conceptual organization and a set of problem-solving strategies which take advantage of that organization. The idea is to model an expert as several problem-solving structures, where each structure performs a generic task, and all the structures cooperate to solve the problems presented to them. The word "task" might be misleading here. The definition of a problem does not directly specify what generic task is appropriate for it, rather a generic task is a strategy that can be adopted for organizing domain knowledge for a specific type of problem solving.
More than one generic task might be appropriate for a problem if the domain knowledge can be adapted to the requirements of each generic task.

**Classificatory Problem Solving**

CSRL is intended to express problem solvers of one generic task, called classification, which is finding the categories or hypotheses within a classification hierarchy that apply to the situation being analyzed. CSRL generalizes the diagnostic problem solving of MDX, an expert system in the medical domain of cholestasis (the lack of bile flow from liver to intestine) (Chandrasekaran et al. 1979; Mittal, 1980; Gomez and Chandrasekaran, 1981; Chandrasekaran and Mittal, 1983). MDX explicitly organizes and uses knowledge in a way that applies to classification in general. Recently, Clancey (1984, 1985) pointed out that a number of AI systems, diagnostic and nondiagnostic, can be described as performing classification. The work on MDX and CSRL lends more weight to Clancey's historical observation but provides a somewhat different framework for analyzing classificatory problem solving. These differences are discussed in a later section.

MDX has a number of interesting features that, not surprisingly, define the design goals of CSRL:

**Explicit Classification Hierarchy.** Disease hypotheses are organized as a classification tree in which the children of a node represent subhypotheses of the parent. Figure 1 illustrates a fragment of the MDX tree.

**Local Decision Criteria.** The responsibility for calculating the uncertainty of hypotheses and for directing "attention" to and from hypotheses is distributed over the nodes. For example, the procedure for calculating the degree of certainty in cholestasis would be located in the cholestasis node. In addition, the procedure(s) for directing attention away from cholestasis to related hypotheses would also be located there. For this reason and also to make an analogy with the organization of the medical community, these nodes are called specialists.
In this metaphor, "directing attention" is realized as transfer of control between specialists.

---
1This is not to say that all classification is hierarchical but that classification hierarchies are associated with strategies that do not apply to other forms of knowledge organization.
---

**Establish, Refine.** Transfer of control is primarily accomplished through a type of hypothesis refinement called establish-refine. Simply put, a specialist that confirms its hypothesis (the establish part) invokes its subspecialists (the refine part). A specialist that rules out or rejects its hypothesis also rules out all of the subhypotheses. At other levels of certainty, the specialist relies on domain knowledge and the context of the problem solving to determine what to do next. For systematic search, it is natural to initially give control to the root specialist. The MDX specialists also contain suggestion rules that allow the specialists to perform a data-directed search.

**Symbolic Uncertainty Calculation.** Calculation of uncertainty is not done by a numerically based calculus but by directly encoding the symbolic judgment of domain experts. We have more to say about this later.

MDX represents a general approach to the classification problem by organizing domain knowledge along the classification hierarchy and integrating it with the establish-refine strategy. In contrast to approaches that make a strict separation between domain knowledge and strategic knowledge, the architecture of MDX permits domain knowledge to directly influence strategic decisions. This approach explicitly recognizes the need for domain knowledge to guide the problem solving as well as the need for domain knowledge to be adapted to the problem-solving strategy.

CSRL

CSRL is a language for representing the specialists of a classification hierarchy and the knowledge within them. Classificatory knowledge is encoded at various levels of abstraction in addition to the nodes in the hierarchy.
Message procedures describe a specialist's behavior in response to messages from other specialists. These contain the knowledge about how to establish or refine a specialist. Knowledge groups are primarily used for uncertainty handling. Knowledge groups are composed of rule-like knowledge that matches the data against specific patterns and, when successful, determines the value of the knowledge group. In the following discussion, we use the classification tree displayed in Figure 2, which is taken from the Auto-Mech expert system (Tanner and Bylander, 1985).

Encoding of Classification Trees

In CSRL a classification system is implemented by individually defining each specialist. The superspecialists and subspecialists of the specialist are declared within the definition. Figure 3 is a skeleton of a specialist definition for the bad fuel problems node from Figure 2. The declare section specifies its relationships to other specialists. The other sections of the specialist are examined later.

Because CSRL is designed to use only a simple classification tree, many choices must be made concerning the composition of the hierarchy. This is a pragmatic decision rather than a search for the "perfect" classification tree. The main criterion for evaluating a classification is whether enough evidence is normally available to make confident decisions. To decompose a specialist into its subspecialists, the simplest method is to ask the domain expert what subhypotheses should be considered next. The subhypotheses should be subtypes of the specialist's hypothesis and usually differ from one another based on a single attribute (for example, location, cause). For further discussion on the design of classification trees, see Mittal (1980) and Bylander and Smith (1985).
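The establish-refine control regime described earlier can be made concrete with a small sketch. The following C fragment is hypothetical (CSRL is not implemented in C, and the `preset` field merely stands in for a specialist's real local decision criteria); it shows only the control flow: a specialist whose confidence reaches +2 or +3 on the -3..+3 scale is established and refines its subspecialists, while a rejected specialist's subtree is never visited.

```c
/* Hypothetical sketch of establish-refine (not CSRL itself). */
#define MAX_SUBS 4

typedef struct Specialist {
    const char *name;
    int preset;      /* toy confidence the real criteria would compute */
    int confidence;  /* filled in by establish, on the -3..+3 scale */
    int visited;
    struct Specialist *subs[MAX_SUBS];
    int nsubs;
} Specialist;

/* Establish: compute the confidence value; +2 or +3 establishes. */
static int establish(Specialist *sp) {
    sp->visited = 1;
    sp->confidence = sp->preset;
    return sp->confidence >= 2;
}

/* Refine only established specialists; a rejected specialist's
 * subtree is pruned (ruled out along with its parent). */
void establish_refine(Specialist *sp) {
    if (establish(sp))
        for (int i = 0; i < sp->nsubs; i++)
            establish_refine(sp->subs[i]);
}
```

A root that confirms its hypothesis visits its children; a child that rejects (confidence -3) is itself examined but its own subspecialists never are, which is exactly the pruning behavior the metaphor of "directing attention" describes.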
Encoding of Local Decision Criteria

The messages section of a specialist contains a list of message procedures that specify how the specialist will respond to different messages from its superspecialist.2

---
2A specialist is not allowed to send messages to its superspecialist. However, other message-passing routes are allowed. Specifically, a specialist can send a message to itself, across the hierarchy, and to indirect subspecialists. In the latter case, each interconnected specialist is sent a suggest message and decides within its suggest message procedure whether to pass the original message downward.
---

Establish-refine (combines establish and refine) and suggest are predefined messages in CSRL; additional messages can be defined by the user. Later we examine how establish and refine procedures are typically constructed.

Message procedures are the highest level of abstraction for classificatory knowledge within specialists. Just as in general message-passing languages, messages provide a way to invoke a particular kind of response without having to know what procedure to invoke and allow the receiver of the message (the specialist in this case) to call upon local knowledge to make its decisions. However, the important thing about message passing in CSRL is not that it's useful as a general programming style but that the organization of problem solving can be modeled by identifying certain kinds of messages with specific meaning.

**Encoding of the Establish-Refine Protocol**

The establish message procedure of a specialist determines the confidence value in the specialist's hypothesis. Figure 4 illustrates the establish message procedure of the BadFuel specialist. relevant and summary are names of knowledge groups of BadFuel. self is a keyword that refers to the name of the specialist. This procedure first tests the value of the relevant knowledge group. (If this knowledge group has not already been evaluated, it is automatically evaluated at this point.)
If it is greater than or equal to 0, then BadFuel's confidence value is set to the value of the summary knowledge group; if not, it is set to the value of the relevant knowledge group. In CSRL a confidence value scale of -3 to +3 is used (integers only). A value of +2 or +3 indicates that the specialist is established. In this case, the procedure corresponds to the following classificatory knowledge: First perform a preliminary check to make sure that BadFuel is a relevant hypothesis to hold. If it is not (the relevant knowledge group is less than 0), then set BadFuel's confidence value to the degree of relevancy. Otherwise, perform more complicated reasoning (the summary knowledge group combines the values of other knowledge groups) to determine BadFuel's confidence value.

```plaintext
(Establish
  (if (GE relevant 0)
      then (SetConfidence self summary)
      else (SetConfidence self relevant)))
```

Figure 4. Establish Procedure of BadFuel

The refine message procedure determines what subspecialists should be invoked and the messages they are sent. Figure 5 shows a refine procedure which is a simplified version of the one that BadFuel uses. subspecialists is a keyword that refers to the subspecialists of the current specialist. The procedure calls each subspecialist with an establish message. If the subspecialist establishes itself (+? tests whether the confidence value is +2 or +3), it is then sent a refine message.

```plaintext
(Refine
  (for specialist in subspecialists do
    (Call specialist with Establish)
    (if (+? specialist)
        then (Call specialist with Refine))))
```

Figure 5. Example Refine Procedure

CSRL also has a facility for specifying suggestion rules in specialists. This is done by implementing "suggestion" knowledge groups, which are elements of the kgs section of the specialist definition. Figure 6 is a suggestion knowledge group that might be part of the FuelMixture specialist (Auto-Mech itself does not use any suggestion rules).
The then part of each rule indicates the specialist(s) that the rule's condition suggests. The value of the knowledge group is the list of specialists that are associated with the rules whose conditions are true. The knowledge group corresponds to the following knowledge: A fuel mixture problem should be initially considered if the car's problem occurs only when the engine is hot or cold. Knocking or pinging sounds suggest that the fuel is bad.

```plaintext
(fuelProblemSuggestions Suggestion
  (if (Or (Ask-YNU? 'Does the problem occur only when the engine is cold')
          (Ask-YNU? 'Does the problem occur only when the engine is hot'))
      then FuelMixture)
  (if (Ask-YNU? 'Do you hear knocking or pinging sounds')
      then BadFuel))
```

Figure 6. Example Suggestion Knowledge Group

To use suggestion rules, the default refine procedure (Figure 5) must be modified. Figure 7 illustrates how this can be done. The modification is the addition of another loop that invokes the suggested specialists with an establish message, conditionally followed by a refine message. The second loop is nearly the same as the default procedure except that it avoids reinvoking any of the suggested specialists.

---
3For convenience, many of the CSRL control constructs mimic those of INTERLISP; however, these constructs are executed by the CSRL interpreter, not by LISP. LISP code is allowed within message procedures, but only within a construct called DoLisp. This is intended to allow interaction with other LISP-implemented systems.
---

CSRL has a variety of other kinds of statements and expressions so that more complicated strategies can be implemented. For example, a Reset statement deletes the confidence value and the knowledge group values of a specialist. This might be used when additional tests are performed, making it necessary to recalculate the confidence value. Also, messages can be parameterized, and message procedures can declare local variables.
### Encoding of Symbolic Uncertainty

The other knowledge groups in the kgs section are used to implement uncertainty handling. Each of these CSRL knowledge groups is intended to correspond to an evidential abstraction underlying the hypothesis. A knowledge group can be thought of as a cluster of production rules that map the values of a list of expressions (boolean and arithmetic operations on data, values of other knowledge groups) to some conclusion on a discrete, symbolic scale. As an example, Figure 8 is the relevant knowledge group of the BadFuel specialist mentioned earlier. It determines whether the symptoms of the automobile are consistent with bad fuel problems. The expressions query the user (who is the database for Auto-Mech) about whether the car is slow to respond, starts hard, has knocking or pinging sounds, or has the problem when accelerating. AskYNu? is a LISP function that asks the user for a Y, N, or U (unknown) answer and translates it into T, F, or U, the values of CSRL's three-valued logic. Each set of tests in the with part of the knowledge group is evaluated until one matches. The value corresponding to the rule that matches becomes the value of the knowledge group. For example, the first rule in the figure tests whether the first expression is true (the ? means that the value doesn't matter). If so, then -3 becomes the value of the knowledge group. Otherwise, subsequent rules are evaluated. The value of the knowledge group will be 1 if no rule matches. This knowledge group encodes the following matching knowledge:

- If the car is slow to respond or if the car starts hard, then BadFuel is not relevant in this case.
- Otherwise, if there are knocking or pinging sounds and if the problem occurs while accelerating, then BadFuel is highly relevant.
- In all other cases, BadFuel is only mildly relevant.

```plaintext
(Refine
  (for specialist in fuelProblemSuggestions
       do (Call specialist with Establish)
          (if (+? specialist)
              then (Call specialist with Refine)))
  (for specialist in subspecialists
       do (if (Not (Member? specialist fuelProblemSuggestions))
              then (Call specialist with Establish)
                   (if (+? specialist)
                       then (Call specialist with Refine)))))
```
Figure 7. Refine Procedure that Uses Suggestion Rules

```plaintext
(relevant Table
  (match (AskYNu? "Is the car slow to respond")
         (AskYNu? "Does the car start hard")
         (And (AskYNu? "Do you hear knocking or pinging sounds")
              (AskYNu? "Does the problem occur while accelerating"))
   with (if T ? ? then -3
         elseif ? T ? then -3
         elseif ? ? T then 3
         else 1)))
```
Figure 8. Relevant Knowledge Group of BadFuel

```plaintext
(summary Table
  (match relevant gas
   with (if 3 (GE 0) then 3
         elseif 1 (GE 0) then 2
         elseif ? (LT 0) then -3)))
```
Figure 9. Summary Knowledge Group of BadFuel

This method of evidence combination allows the calculation of uncertainty to be hierarchically organized. In this instance, the hierarchy (illustrated in Figure 10) is very simple. For a more complex evaluation, additional knowledge groups, hierarchy layers, and pattern combinations can be defined as needed.

The function of the fuel system is to deliver a mixture of fuel and air to the cylinders of the engine. It can be divided into major subsystems (fuel delivery, air intake, carburetor, vacuum manifold) that correspond to initial hypotheses about fuel system faults. Auto-Mech consists of 34 CSRL specialists in a hierarchy whose depth varies from four to six levels. Its problem solving closely follows the establish-refine strategy.
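Putting the establish procedure (Figure 4) together with the relevant and summary knowledge groups (Figures 8 and 9), BadFuel's confidence computation can be sketched in plain Python. This is a rough rendering, not CSRL itself; the `gas` value stands in for another knowledge group whose definition is not shown in the text.

```python
# A rough Python rendering (not CSRL) of how BadFuel's confidence is
# computed from its knowledge groups, following Figures 4, 8, and 9.
# The "gas" value stands in for another knowledge group not shown here.

def relevant(slow, hard, knock_and_accel):
    # Figure 8: (if T ? ? then -3 elseif ? T ? then -3 elseif ? ? T then 3 else 1)
    if slow:
        return -3
    if hard:
        return -3
    if knock_and_accel:
        return 3
    return 1

def summary(rel, gas):
    # Figure 9: (if 3 (GE 0) then 3 elseif 1 (GE 0) then 2 elseif ? (LT 0) then -3)
    if rel == 3 and gas >= 0:
        return 3
    if rel == 1 and gas >= 0:
        return 2
    if gas < 0:
        return -3
    return None  # no rule matches; Figure 9 shows no default

def establish(rel, gas):
    # Figure 4: if relevant >= 0, confidence = summary; else confidence = relevant
    return summary(rel, gas) if rel >= 0 else rel

# Knocking while accelerating, non-negative gas evidence: BadFuel establishes.
rel = relevant(slow=False, hard=False, knock_and_accel=True)
confidence = establish(rel, gas=1)
```

Note how the hierarchical organization appears directly in the code: `summary` consumes the value of `relevant`, just as one knowledge group can consume the values of others.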
Before this strategy is invoked, Auto-Mech collects some initial data from the user. This includes the major symptom that the user notices (such as stalling) and the situation when this occurs (for example, accelerating and cold engine temperature). Any additional questions are asked while Auto-Mech's specialists are running. The diagnosis then starts and continues until the user is satisfied that the diagnosis is complete. The user must make this decision because the data that Auto-Mech uses are very weak at indicating specific problems and, more importantly, because Auto-Mech is unable to make the repair and determine whether the problem has been fixed. A major part of Auto-Mech's development was determining the assumptions that would be made about the design of the automobile engine and the data the program would be using. Different automobile engine designs have a significant effect on the hypotheses that are considered. A carbureted engine, for example, will have a different set of problems than a fuel-injected engine (the former can have a broken carburetor). The data were assumed to come from commonly available resources. The variety of computer analysis information that is available to mechanics today was not considered in order to simplify building Auto-Mech.

### Red

Red is an expert system whose domain is red blood cell antibody identification (Smith et al. 1985). An everyday problem that a blood bank contends with is the selection of units of blood for transfusion during major surgery. The primary difficulty is that antibodies in the patient's blood can attack the foreign blood, rendering the new blood useless as well as presenting additional danger to the patient. Thus, identifying the patient's antibodies and selecting blood that will not react with them is a critical task for nearly all red blood transfusions. The Red expert system is composed of three major subsystems, one of which is implemented in CSRL.
The non-CSRL subsystems are a database that maintains and answers questions about reaction records (reactions of the patient's blood in selected blood samples under a variety of conditions) and an overview system which assembles a composite hypothesis of the antibodies that would best explain the reaction record (Josephson et al. 1984). CSRL is used to implement specialists corresponding to each antibody that Red knows about (about 30 of the most common ones) and to each antibody subtype (different ways that the antibody can react). The major function of the specialists is to rule out antibodies and their subtypes whenever possible, thus simplifying the job of the overview subsystem, and to assign confidence values, informing the overview subsystem of the plausibility of each antibody. The specialists query the database for information about the test reactions and other patient information and also tell the database to perform certain operations on reaction records. An interesting feature of Red is how it handles the problem of interacting hypotheses. It is possible for the patient's blood to have practically any number or combination of antibodies, making it very hard for a single specialist to determine how well it will fit with other specialists in a composite hypothesis. In Red each specialist is encoded to assume that it might only partially account for the data; it doesn't reduce its confidence value if there is extra data to account for. The knowledge of how the specialists can interact is left to the overview subsystem. This would be problematic if few specialists could rule themselves out, but it turns out that it is rare to have more than a few antibodies that cannot be independently ruled out. Thus, Red's CSRL subsystem makes the overview's problem solving more manageable because it considerably reduces the amount of search that would otherwise be necessary.
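Red's division of labor can be sketched abstractly: each specialist only rules itself out (or assigns a confidence) against the reaction record, and a separate overview step works with whatever survives. The antibody names and the rule-out test below are invented placeholders, not Red's actual knowledge.

```python
# Invented sketch of Red's division of labor: specialists independently
# rule themselves out against the reaction record; a separate overview
# step composes the survivors. Names and tests are illustrative only.

reaction_record = {"anti-D": "reacts", "anti-K": "no-reaction", "anti-E": "reacts"}

def ruled_out(antibody):
    # A specialist rules itself out when the record is inconsistent with it.
    return reaction_record.get(antibody) == "no-reaction"

def classify(antibodies):
    # The CSRL side: prune whatever can be independently ruled out.
    return [a for a in antibodies if not ruled_out(a)]

def overview(survivors):
    # Stand-in for hypothesis assembly, which would build and critique
    # composite hypotheses from the remaining candidates.
    return sorted(survivors)
```

Because most antibodies rule themselves out, the assembly step sees only a handful of candidates, which is why the search it must do stays manageable.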
### Real-World Use of CSRL

CSRL is being used to develop two commercial systems by the Knowledge-Based Systems group at the Battelle Columbus Institute. WELDEX (Mahalingam and Sharma, 1985) and ROMAD (Mahalingam et al. 1985) are diagnostic systems for detecting welding defects and evaluating machinery, respectively. A brief description of WELDEX follows. WELDEX identifies possible defects in a weld from radiographic data on the weld. Industry standards and regulations require careful inspection of the entire weld and a very high level of quality control. Thus, for industries that rely on welding technology, such as the gas pipeline industry, radiographic inspection is a tedious, time-consuming, and expensive part of their operation. This problem can be decomposed into two tasks: visual processing of the radiograph to extract relevant features of the weld and mapping these visual features to the welding defects that give rise to them. WELDEX is intended to perform the second task. The current prototype consists of 25 CSRL specialists that are organized around different regions of the weld, taking advantage of the fact that each class of defects tends to occur in a particular region. The knowledge groups in these specialists concentrate on optical contrast, shape, size, and location of the radiographic features. A customer version of WELDEX is currently being developed. Future work is anticipated on developing a visual-processing system whose output would be processed by WELDEX, thus automating both parts of the radiographic inspection problem.

### Weaknesses of CSRL

CSRL lacks a good set of classification control primitives for refine message procedures, relying instead on general programming constructs.
The result is that it is hard to encode refine procedures in CSRL that combine different priorities, such as invoking suggested specialists before those not suggested, invoking more common specialists before rarer specialists, and refining specialists with higher confidence values before those with lower confidence values. These procedures should also respond to priorities that the superspecialist requests. Additional work is needed to create a set of constructs that more closely match the kinds of operations that are needed for refinement and for cooperative action. Another problem is the ambiguity of the -3 to +3 confidence scale.[4] For example, +3 was used to mean near certainty in MDX, highly relevant to consider in Auto-Mech, and highly plausible in Red. Future versions of CSRL will adopt more meaningful symbols for confidence values, such as certain, likely, plausible, problematic, and ruled-out. A major missing piece in CSRL is that there is no strategy that determines when classification should stop. Currently, the default procedures simply ask the user if the current solution is satisfactory. From work on Auto-Mech and Red, it appears that this decision depends strongly on the goals of whoever uses the CSRL system. In Auto-Mech, the classification should stop when a repair is successful; in Red, it should stop when a "best explanation" is reached. The work on Red's overview system is a step in this direction for diagnostic systems, but there needs to be more cooperation between the overview subsystem and CSRL (currently the overview subsystem starts after the specialists are finished) and a better understanding of what kinds of interactions can occur between two hypotheses. Progress in this area would also help increase the focus of the diagnosis; that is, the diagnosis could concentrate on accounting for the most important manifestation(s).
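One way to combine the refinement priorities discussed above (suggested specialists first, then higher confidence, then more common faults) is a single ordering function. The sketch below is hypothetical: all names, confidence values, and prevalence figures are invented for illustration, and it shows only the ordering, not the message-passing itself.

```python
# Hypothetical sketch of a missing control primitive: one ordering
# function combining the priorities named in the text. All names and
# numbers are invented for illustration.

def refine_order(subspecialists, suggested, confidence, prevalence):
    return sorted(
        subspecialists,
        key=lambda s: (
            s not in suggested,      # suggested specialists first
            -confidence.get(s, 0),   # then higher confidence first
            -prevalence.get(s, 0),   # then more common faults first
        ),
    )

order = refine_order(
    ["Vacuum", "Carburetor", "FuelDelivery", "AirIntake"],
    suggested={"Carburetor"},
    confidence={"FuelDelivery": 3, "Vacuum": 2, "AirIntake": 2},
    prevalence={"Vacuum": 5, "AirIntake": 1},
)
```

A declarative construct of roughly this shape, built into the language, would let refine procedures state their priorities instead of hand-coding control loops.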
### Issues of Classification

In this section, we further explain our positions on a few issues that influence the design of CSRL.

### The Relationship of Classification to Diagnosis

The work on MDX demonstrated that classification is an important strategy in medical diagnosis. Because of this, we speculated that classificatory problem solving was the primary component of diagnosis in general, and consequently, what we now call the classificatory task had been called the diagnostic task. However, we have come to realize that diagnosis is a more complex phenomenon in which other problem solvers have major roles, and, depending on the domain, classification can play a minor role. A useful companion to the classification hierarchy is a problem solver that performs the generic task of knowledge-directed data retrieval (Mittal et al. 1984). This problem solver can be thought of as an intelligent database that organizes the case description, answers queries from the classification specialists, and makes simple inferences from the data. For example, an intelligent database should be able to infer exposure to anesthetics if the patient has had major surgery or has been exposed to halothane (a type of anesthetic). The classificatory specialists are then relieved from knowing how one datum could be inferred based on its conceptual relationships to other data. In some applications, such as diagnosing devices that are heavily instrumented with sensors (for example, nuclear power plants), there is almost a direct match between data and malfunctions. In these cases, the intelligent database performs the work of interpreting raw data, and the classificatory problem solving mainly consists of simply associating the processed data with hypotheses.

---

[4] The use of numbers also created the false impression that these confidence values could be added and subtracted.
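The kind of simple inference expected of an intelligent database can be sketched as a lookup backed by inference rules. The anesthetics rule below follows the example in the text; the rule format and dictionary layout are invented for illustration.

```python
# Sketch of knowledge-directed data retrieval: an "intelligent database"
# answers queries either from stored case data or by simple inference.
# The anesthetics rule follows the text; the layout is invented.

case_data = {"major surgery": True, "exposed to halothane": False}

inference_rules = {
    # Exposure to anesthetics follows from major surgery or halothane.
    "exposed to anesthetics":
        lambda d: bool(d.get("major surgery") or d.get("exposed to halothane")),
}

def query(datum):
    if datum in case_data:
        return case_data[datum]          # directly recorded
    if datum in inference_rules:
        return inference_rules[datum](case_data)  # inferred
    return None                          # unknown to the database
```

A classification specialist can then ask for "exposed to anesthetics" without knowing how that datum relates conceptually to surgery or halothane.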
Another important part of diagnosis is accounting for abnormal findings and producing composite hypotheses when one malfunction cannot account for all of them. Although these capabilities have not been needed in some systems (such as MDX and MYCIN), they appear to be necessary in other systems and domains (for example, INTERNIST-I, whose domain is internal medicine [Pople, 1977], and ABEL, whose domain is acid-base disorders [Patil et al. 1981]). Josephson et al. (1984) have proposed a generic task called hypothesis assembly to perform these actions in coordination with a classificatory problem solver. Roughly, the classifier generates plausible hypotheses and determines the data they can account for. The hypothesis assembler builds and critiques composite hypotheses, taking into account interactions between hypotheses. The Red system discussed earlier implements this combination of problem solvers. There are several other issues relevant to diagnostic problem solving that need to be considered, such as test ordering, causal explanation of findings, and therapeutic action. Fully resolving all of these issues and integrating their solutions into this diagnostic framework are problems for future research.

### Tangled Hierarchies

CSRL, as previously noted, does not allow the representation of tangled hierarchies (hierarchies in which some nodes have more than one parent) and, consequently, is restricted to representing classification trees. Although tangled hierarchies are completely general, trees represent a simpler and cleaner solution for classification. In our experience, tangled hierarchies can often be untangled by a judicious mix of eliminating nodes that provide little classificatory power and carefully analyzing the domain. For example, consider the tangled hierarchy in Figure 11.
Although infection and viral infection are concepts that physicians use, they are not very useful for classificatory problem solving because there is little evidence that distinguishes infections and viral infections from other possibilities. Also, one might make hepatitis (liver inflammation) a subspecialist of cholestasis because hepatitis often causes cholestasis. However, hepatitis is not a subtype of cholestasis, and, furthermore, hepatitis often does not cause cholestasis; thus hepatitis should not be ruled out if cholestasis is ruled out. Figure 12 illustrates an untangled version of this hierarchy. The infection nodes are eliminated because of their weaknesses. A new node, “cholestasis caused by hepatitis,” is added to distinguish this cause of cholestasis from other causes. This new node is one kind of composite hypothesis that can be embedded in a classificatory structure. The problem-solving issues of how this hypothesis can take advantage of decisions concerning the hepatitis hypothesis are discussed in Gomez and Chandrasekaran (1981).

Figure 11. A Tangled Hierarchy

Another advantage of trees over tangled hierarchies is that the knowledge of a specialist can be biased to the context of its superspecialist. Because of the establish-refine strategy, the specialist's knowledge can assume that the superspecialist has been established. This simplifies the amount of knowledge that needs to be encoded because decisions can be made in a single context, eliminating the caveats needed to encode other contexts. Also, the specialist can take advantage of knowledge that distinguishes it from its siblings. However, there is a danger involved: if the superspecialist produces overconfident decisions, the biasing of knowledge can cause the specialist to also be overconfident. This should not be taken as an argument that tangled hierarchies are never necessary but that they can often be simplified without loss of problem-solving power.
There is a need to discover and investigate those situations in which the complexities of tangled hierarchies pay off in increased problem-solving ability.

### Uncertainty in Classification

Uncertainty handling in CSRL is an outgrowth of the technique used in MDX (Chandrasekaran et al. 1982). This technique is based on a number of assumptions:

Uncertainty is not a unitary phenomenon. Much of the debate within AI treats uncertainty as if it were a single phenomenon. However, a clear distinction needs to be made between uncertainty as degree of chance versus uncertainty as degree of fit. Rather than assessing the probabilities of hypotheses, the method of uncertainty handling incorporated within CSRL measures the qualitative degree of fit between hypotheses and data. It is interesting to note that this viewpoint has recently been supported by Cohen et al. (1985), who cogently argue that degree of fit (called representativeness in their paper) is an appropriate measure of uncertainty in classification.

Uncertainty introduced by the problem solver should be kept to a minimum. In AI, uncertainty-handling techniques can introduce uncertainty by making assumptions that are not true of the domain and by requiring knowledge that cannot be obtained, forcing implementors to make guesses. In the case of expert systems, it is important that the expert’s reasoning is accurately modeled.

Symbolic uncertainties should be used for expert system reasoning. This and the previous paragraph seem to be contradictory because symbolic uncertainties appear to introduce uncertainty from the start. However, we need to remember that these judgments are coming from people, who simply don’t have specific fractions in their heads. Typically, knowledge engineers translate the expert’s symbolic judgments into some numeric uncertainty calculus, assume that the methods of inference combination will correspond to the expert’s reasoning, and then translate the numbers back into the symbolic scale.
Each of these steps introduces uncertainty because they move from the expert’s reasoning methods to an artificial system that the expert doesn’t use. It would make more sense to use the expert’s judgment not only for the “single” inferences, but for the combination of inferences as well. Our methodology, which we call hypothesis matching, allows for a hierarchical evidence structure in which the expert provides symbolic judgments on combinations of data as well as on how these judgments should be combined to determine the confidence in the hypothesis. The following scenario illustrates how the hierarchical organization can be naturally formed from the data. Suppose that we want to measure the evidence for a particular bacterial infection called X. It is likely that a large amount of data is potentially useful as evidence for or against X. It would be a combinatorial nightmare to match all combinations of data to a judgment on X’s presence. However, these data are not evidentially unrelated to one another, but subsets of them are evidence about particular parts of the infection process. One thing we probably want to know is whether the patient has been exposed to the bacteria that causes X. Usually, exposure to bacteria is not a datum given to the system; instead, there are several data that give indications about exposure. For example, open wounds, places the patient has been, what the patient has been eating or drinking, and other activities of the patient are all factors that need to be taken into account. Now we can ask the expert how these data are evidence for exposure to X bacteria. The left half of Figure 13 illustrates this part of the evidence combination. For other data, we ask how they relate to susceptibility, incubation, various effects of the infection, and so forth. After a judgment is made on each of these features of X, we can then ask how combinations of these abstractions (at various levels of uncertainty) relate to X itself. 
This is illustrated in the right half of Figure 13. As this scenario indicates, the methodology of hypothesis matching is very simple—data are combined into judgments on evidential abstractions underlying the hypothesis, and these judgments are combined, using as many levels as needed, into a judgment on the hypothesis itself. Here are some other points to keep in mind: - Because any number of levels of abstractions is permitted, a large body of evidence can be decomposed and combined in a combinatorially manageable fashion. - These abstractions are not arbitrarily chosen but come from the expert’s understanding of the hypothesis. - Because the expert is providing symbolic judgments at each level of abstraction, there is no need for an uncertainty calculus, and there is no reason to translate into and out of a continuous probability system. It turns out that hypothesis matching is useful for more than classification. Both MDX's database (Mittal, 1980; Mittal et al. 1984) and AIR-CYL (Brown and Chandrasekaran, 1984), a design expert system, used forms of hypothesis matching to map from situations to decisions. Based on these examples and some further analysis, hypothesis matching is now considered to be a generic task. This method of evidence combination closely matches the signature table strategy used by Samuel's checker-playing program (Samuel, 1967). Just as Samuel used layers of signature tables to reduce the complexity of decision making in his program, the layers of evidential abstraction eliminate the need to provide one rule for each combination of data and also make the decision making comprehensible to both the knowledge engineer and the domain expert (Chandrasekaran, 1985). Hypothesis matching allows a simple but powerful debugging technique (see Sticklen et al. [1985b] for more explanation and for another application of hypothesis matching). 
If the expert system determines the wrong level of uncertainty for a hypothesis, the bug can be traced to the abstractions that produced incorrect answers, which can then be appropriately modified.[5] After many debugging sessions, the system becomes tuned to the domain within the conceptual framework of the domain expert.

---

[5] This can be as simple as adding another "rule" to the abstraction or as difficult as completely reorganizing the abstractions.

To make our position clear, we do not mean to advocate hypothesis matching for every classification system. Bayesian probabilities, for instance, should be used if quantitative data on probabilities are available and if the assumptions apply. However, when the idealized conditions of uncertainty calculi are not applicable, which they often aren't for expert systems, they become knowledge engineering obstacles instead of knowledge engineering tools. It makes much more sense to use the expert's judgments and conceptual structures whenever possible and to test and refine them by using real cases.

**Comparison to Heuristic Classification**

Clancey (1984, 1985) has proposed a description of classification, called "heuristic classification," that is "an attempt to specify what many heuristic programs known as 'expert systems' do." Heuristic classification has three parts:

- **Data Abstraction:** This relation maps observations into data classes or characterizations based on categorical inferences. Clancey gives the examples of inferring low white blood count from a specific white blood count and immunosuppression from the low white blood count (low white blood count is a form of immunosuppression).
- **Heuristic Match:** This relation maps data or data abstractions to hypotheses in the classification hierarchy. This mapping is based on heuristic, but not certain, knowledge.
- **Refinement:** This relation indicates what hypotheses are subhypotheses of other hypotheses.
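The three relations can be sketched as separable steps, which also previews the argument that they are distinct generic tasks rather than one primitive. The white-blood-count abstraction follows the text; the heuristic-match and refinement tables below are invented placeholders.

```python
# Illustrative sketch of Clancey's three relations as separable steps.
# The wbc abstraction follows the text; the match and refinement
# tables are invented placeholders.

def abstract(data):
    # Data abstraction: categorical inference from observations.
    facts = set(data)
    if data.get("wbc", 10_000) < 4000:
        facts.add("low wbc")
    if "low wbc" in facts:
        facts.add("immunosuppression")  # low wbc is a form of immunosuppression
    return facts

heuristic_match = {"immunosuppression": ["infection risk"]}  # invented mapping
refinement = {"infection risk": ["bacterial", "viral"]}      # invented subhypotheses

def classify(data):
    facts = abstract(data)
    hypotheses = [h for f in sorted(facts) for h in heuristic_match.get(f, [])]
    return {h: refinement.get(h, []) for h in hypotheses}
```

Because each step has its own table and its own control, each could be served by a different tool, which is the point pursued next.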
The processes of heuristic classification then use these relations to find a solution in the hierarchy. We agree with Clancey that heuristic classification characterizes the problem solving of a considerable number of AI programs and that it is a useful strategy to solve many problems. However, from our perspective, heuristic classification corresponds to a combination of generic tasks rather than being a primitive strategy. This point has practical consequences. Instead of developing a single "knowledge engineering tool designed specifically to perform heuristic classification," our approach leads us to develop separate tools for each generic task and to provide mechanisms that not only allow heuristic classification to be constructed out of several problem solvers but also allow those generic tasks to be used in other ways. Clancey's data abstraction relation, for example, corresponds to the capability of the intelligent database described previously. This capability, however, is not just useful for classification but for other problem-solving methods as well. We can imagine a designer, that is, an agent who constructs a design plan from specifications, needing to know if some material is metallic or if the temperature of the operating environment will be below freezing. Because simple data inference is a useful function for many tasks, and because it requires knowledge organization and problem-solving strategies different from classification, it is natural to think of it as a separate reasoning strategy. This separation is more clearly seen in programs that do classification but not data abstraction. For example, the DART diagnostic program (Genesereth, 1984) uses classification to find circuit faults, but instead of using data abstraction it generates tests to confirm or reject faults.
Although DART’s problem solving does not fit into the heuristic classification mold, it can be analyzed as a combination of classification problem solving and test generation based on functional knowledge. The heuristic match relation in heuristic classification is intended to account for two types of behavior: suggesting hypotheses and confirming or rejecting hypotheses. Our approach accounts for the first type by embedding suggestion rules in individual specialists. This has the advantage of tailoring suggestion knowledge to the classificatory context. Also, the suggestion rules can operate on patterns of data rather than on just a single datum or data abstraction. The apparent disadvantage in efficiency of not executing the rules when the data are entered can be alleviated by implementing the suggestion rules as demons. Confirming and rejecting hypotheses within CSRL is done by hypothesis matching, a generic task that we have already covered in some detail. Finally, we wish to emphasize that classification is useful for more than selecting a single solution from a hierarchy. We have already mentioned one architecture in which classification and hypothesis assembly can be integrated to produce composite hypotheses. Even without hypothesis assembly, some forms of hypothesis interaction can be reasoned about by a classification problem solver (Sticklen et al. 1985a).

**Conclusion**

How does CSRL meet the needs that we listed in the introduction? Currently, CSRL makes some small steps toward constraining the meaning of symbols. Because ruling out a specialist results in ruling out its subspecialists, CSRL’s classification tree is more than a simple way to associate hypotheses with one another. The children of a hypothesis must be subhypotheses. The current CSRL implementation does not enforce the meaning of its messages, but given a better analysis of the operations that are needed for each type of message, a message procedure could be appropriately restricted.
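The pruning behavior noted above (ruling out a specialist silently rules out its whole subtree, so children must be genuine subhypotheses) can be sketched as a small tree walk. The hierarchy names loosely follow the Auto-Mech description; the confidence values are invented.

```python
# Sketch of establish-refine pruning: only established specialists
# (+2 or +3) refine their children, so a ruled-out node's subtree is
# never visited. Names follow Auto-Mech loosely; confidences invented.

tree = {
    "FuelSystem": ["FuelDelivery", "AirIntake", "Carburetor", "VacuumManifold"],
    "FuelDelivery": ["BadFuel"],
}

confidence = {"FuelSystem": 3, "FuelDelivery": -3, "AirIntake": 2}

def explore(node, visited=None):
    visited = [] if visited is None else visited
    visited.append(node)                  # this specialist was established/tested
    if confidence.get(node, 0) >= 2:      # established: send refine
        for child in tree.get(node, []):
            explore(child, visited)
    return visited
```

Here FuelDelivery rules itself out, so BadFuel is never even considered; this is what makes the tree more than a mere grouping of hypotheses.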
CSRL makes the organization of classificatory problem solvers explicit by providing appropriate abstractions for classificatory knowledge: specialists, message procedures, knowledge groups, and rules. The expert system implementor is then relieved from the burden of implementing one level of organization in a language at a different level and is free to concentrate on the conceptual structure of the domain. Also, there is a greater potential to embed general explanation facilities that can take advantage of the organization. We believe that CSRL provides a good match to a sizable portion of human expertise. Whenever the solution set of a problem (or subproblem) can be characterized by a classification tree and whenever an expert can provide the judgments that map data to confidence in classificatory hypotheses and the evidential abstractions that underlie them, the classification tree and the judgments can be directly represented in CSRL. From the perspective of the generic task framework, CSRL handles only two (classification and hypothesis matching) of the many types of problem solving that experts use. Besides improving CSRL, special-purpose languages need to be developed for the other generic tasks that have been identified, and these languages need to be integrated for cooperative problem solving. Currently, languages for knowledge-directed data retrieval (IDABLE [Intelligent DAta Base Language]) and another generic task called “plan selection and refinement” (PSRL [Plan Selection and Refinement Language]) (Brown and Chandrasekaran, 1984) have been implemented. In addition, a language for hypothesis assembly is also being developed. This family of languages should be a powerful tool because it will allow implementors to encode different types of problem-solving knowledge, such as that used for diagnosis, in the appropriate form and to integrate the resulting problem solvers into complex knowledge-based systems. 
For example, note that CSRL together with IDABLE will provide the function of Clancey’s heuristic classification while making IDABLE’s function available for other situations where a data abstraction capability is called for.

**References**

Chandrasekaran, B. 1986. “From Numbers to Symbols to Knowledge Structures: Pattern Recognition and Artificial Intelligence Perspectives on the Classification Task.” In Pattern Recognition in Practice II. Amsterdam: North-Holland.
Building Application-Specific Operating Systems: A Profile-Guided Approach

Pengfei YUAN$^{1,2}$, Yao GUO$^{1,2,*}$, Lu ZHANG$^{1,2}$, Xiangqun CHEN$^{1,2}$ & Hong MEI$^{1,2}$

$^1$Key Laboratory of High-Confidence Software Technologies (Ministry of Education); $^2$School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China

Citation: SCIENCE CHINA Information Sciences; doi: 10.1007/s11432-017-9418-9. View online: http://engine.scichina.com/doi/10.1007/s11432-017-9418-9. Published by the Science China Press.

Abstract Although operating system optimization has been studied extensively, previous work mainly focuses on solving performance problems. In the cloud era, many servers run only a single application, making it desirable to provide an application-specific operating system (ASOS) that is most suitable for that application. In contrast to existing approaches that build an ASOS by manual redesign and reimplementation, this paper presents Tarax, a compiler-based approach to constructing an ASOS for each application. With a profile collected from executing the target application on an instrumented Linux kernel, Tarax recompiles the kernel while applying profile-guided optimizations. Although GCC has already implemented the optimization process that can be applied to user applications, it does not work on the Linux kernel directly. We modify the Linux kernel and GCC to support kernel instrumentation and profile collection. We also modify GCC to reduce the size of optimized kernel images. We conduct experiments on six popular server applications: Apache, Nginx, MySQL, PostgreSQL, Redis and Memcached. Experimental results show that application performance improves by 8.8% on average (up to 16%) on the ASOS.
We also perform detailed analysis to reveal how the resulting ASOS improves performance, and discuss future directions in ASOS construction.

Keywords Operating system, Linux kernel, Performance, GCC, Profile-guided optimization

Citation Yuan P, Guo Y, Zhang L, et al. Building Application-Specific Operating Systems: A Profile-Guided Approach. Sci China Inf Sci, for review

1 Introduction

As the foundation of a computer system, the operating system (OS) is critical to the performance of all applications running on it, especially system-intensive applications that invoke kernel features extensively [1]. As a result, OS optimization has been studied extensively, spanning a vast body of research work that tries to optimize every aspect of an OS. Most of these efforts tend to solve specific performance problems in a general way that is suitable for various types of applications. In order to provide a "one for all" OS, tradeoffs are made to guarantee that the performance is consistently good for all applications. However, such a general-purpose OS is often suboptimal for a specific application. Many computers run only a very small set of applications or even a single application. For example, the computer behind an ATM machine typically runs only a single application. Many web servers run nothing but the Apache server. In the cloud era, as there are more and more servers, it becomes more prevalent to run a single application on each dedicated server or virtual machine, instead of running many applications on one server as in the past. As a result, we argue that an application-specific operating system (ASOS) should be built to provide an optimal running environment for each application. ASOS was first proposed by Anderson [2] more than 20 years ago. ASOS differs from an ordinary OS in that it focuses on the performance of a specific application, instead of the overall performance for all possible applications.

* Corresponding author (email: yaoguo@pku.edu.cn)
Figure 1 shows the general idea of ASOS, where each application runs on a dedicated OS kernel. Recent examples of ASOS are mostly based on the principle of the library OS introduced by exokernel [3]. For example, unikernel [4] focuses on optimizing for the cloud and is adopted by LightVM [5] for its low memory footprint. Arrakis [6] and IX [7] are proposed for datacenter workloads. Although an application-specific library OS can achieve significant size or speed improvement, it typically requires an entire reimplementation of the OS kernel, and even of the applications running on it. It typically requires first manually identifying the performance bottleneck, and then redesigning and reimplementing the whole system. Although it is realistic to build an ASOS for some particular application or set of applications, it is almost impossible to build one for every application. In this paper, we propose Tarax, a compiler-based approach that takes advantage of profile-guided optimizations to construct an ASOS for each application. Compared to previous approaches, Tarax does not need to modify OS source code when building an ASOS. Therefore we can build an ASOS with significantly less effort. While most existing work could only develop an ASOS for one type of application, Tarax can build a true ASOS that is specific to each application. Specifically, Tarax extends profile-guided optimization (PGO) in GCC to perform application-specific optimizations on the Linux kernel. PGO makes use of feedback collected from runtime profiling to guide the compiler optimization of a program. By employing runtime feedback, the compiler can provide more accurate optimizations than without the feedback. PGO is commonly used for user applications to improve performance. Well-known projects such as Firefox and PHP have already adopted this technique for a few years. GCC itself can also be built with PGO and shows about 7% speedup.
We have demonstrated the feasibility of applying PGO to the Linux kernel to achieve speedups in a workshop paper [8]. This paper extends previous work, and proposes a more general solution in Tarax to build an ASOS with the help of PGO. We also perform comprehensive analysis on the experimental results to provide insights on how profile information helps improve OS performance. Since PGO in GCC cannot be directly applied to the Linux kernel, we investigate the reasons why PGO does not work and make corresponding modifications to the Linux kernel and GCC to support kernel instrumentation and feedback collection. At the same time, to make GCC more suitable for building an ASOS, we also modify GCC to produce smaller Linux kernel binaries when feedback is available. Overall, we have modified 1,017 lines of code in the Linux kernel and 148 lines of code in GCC, and submitted some of our modifications to the open source community, some of which have already been accepted. We also automate the Tarax procedures so that an ASOS can be constructed with little user intervention and no need to modify the Linux kernel code. We make the following main contributions in this paper:

- We propose Tarax, a compiler-based approach to constructing ASOS, which achieves the “one kernel for one application” goal. Tarax is highly automated with a dedicated toolchain, such that users do not need to make any manual modifications to the kernel or to the target applications.
- We conduct experiments on six popular server applications to demonstrate the effectiveness of Tarax. The results show that the performance of these applications improves by up to 16% when running on the Linux kernel optimized for each application. We also perform extensive analysis to reveal insights into the performance optimization opportunities arising from Tarax.

2 Tarax Overview and Challenges

Our ultimate goal is to build ASOS automatically.
To achieve this goal, Tarax is designed as a compiler-based approach that takes advantage of profile-guided optimizations. Figure 2 presents an overview of Tarax. We aim to build application-specific Linux kernels for popular server applications such as Apache, MySQL and Redis. With the profile feedback from running individual applications, we rely on the compiler (i.e., GCC) to perform better optimizations on Linux kernel source code, and create kernel images optimized for the corresponding application.

2.1 PGO in GCC

Profile-guided optimization (PGO) has been well studied in the compiler community [9]. The compiler attempts to mitigate the cost of a program’s generality by using feedback information such as control flow graph (CFG) and expression value profiles, which are collected in one or more previous program runs. The compiler then focuses its optimization efforts on the frequently executed portions of the program by understanding the run-time tendencies within these portions. PGO has been applied to large open source projects such as Firefox and Chrome. A typical PGO process consists of the following phases:

- **Instrumentation.** The compiler instruments the target application during compilation in order to collect profile information that will be used for later optimizations. The profile information consists of control flow traces, value and address profiles, etc.
- **Profile collection.** The instrumented target application is executed to collect profile information. The execution process should reflect real-world runtime scenarios.
- **Optimization.** The compiler uses the profile information collected in the previous phase to optimize the target application. The profile information helps the compiler make better decisions on branch prediction, basic block reordering, function inlining, loop unrolling, etc.

In GCC, PGO instrumentation can be enabled by turning on an option (-fprofile-generate).
After instrumentation, one needs to run the application and collect profile data. Finally, the application is recompiled with an option (-fprofile-use) to turn on the compiler optimizations using the collected profile data: branch optimizations, basic block reordering, function inlining, register allocation, code partitioning, etc. Recent GCC versions also support sampling-based AutoFDO (automatic feedback directed optimizer), which does not require instrumentation. We will discuss it in §6.3.

2.2 Challenges

Applying PGO to user applications such as Firefox can be performed by simply enabling the related options in GCC. However, applying it directly to the Linux kernel faces several technical challenges:

1. **How to enable kernel instrumentation.** To collect profile feedback from individual applications, the kernel should be instrumented. However, unlike user applications, some features in the Linux kernel conflict with compiler instrumentation, which may result in Linux failing to boot.
2. **How to collect profile information.** In order to enable profile collection, the compiler has some auxiliary libraries that the instrumented program should link against. But the Linux kernel is self-contained and does not allow linking against external libraries.
3. **When to collect profile information.** For an instrumented program, the profile feedback is collected on exit. However, the runtime behavior of the kernel is different since it never really exits. We need to collect profile feedback on-the-fly and carry out post-processing.
4. **How to choose correct optimizations.** If profile feedback is available, the compiler can perform more aggressive optimizations that are otherwise disabled by default. Some optimizations will cause wrong code generation on certain kernel functions, resulting in build failure.
3 Tarax Design and Implementation

3.1 Tarax Design

Figure 3 presents the architecture of Tarax, where the shaded components involve our modifications and implementation. To solve the challenges listed in §2.2, we make modifications to Linux, GCC, as well as the profile data files collected from Linux. We first need to design an approach to enable PGO instrumentation on the Linux kernel. Fortunately, the gcov subsystem of Linux shares the same instrumentation infrastructure and the same data format with the PGO implementation in GCC. The main difference is that gcov includes instrumentation capabilities for only CFG profiling. It does not support value profiling, which is required by PGO. In order to enable full kernel instrumentation, Tarax extends the gcov subsystem of Linux. We also modify GCC to adapt to kernel instrumentation. To collect profile information from the kernel, we make use of the existing debug filesystem (DebugFS) interface. To choose better optimizations for the kernel, we modify the optimization option handling logic in GCC to support a better size-speed tradeoff.

3.2 Kernel Instrumentation

Linux kernel instrumentation in Tarax is based on the gcov subsystem. It already supports the -fprofile-arcs instrumentation, which is used in coverage testing. In order to support full PGO instrumentation, we make modifications to the gcov subsystem to support value profiling, and also make modifications to handle various other issues.

- **Value profiling.** To support profiling on values via instrumentation, we add the following profilers that are used in the instrumentation phase to the kernel gcov subsystem: indirect call profiler, ior profiler, average profiler, one value profiler, interval profiler, pow2 profiler, time profiler, and indirect call topn profiler. These profilers work together with the CFG arcs profiler, which is already supported in the Linux kernel.
Besides these profilers, we also need to add profile merging functions, which are included in the auxiliary libraries of GCC. An instrumented program should link against these libraries, but the Linux kernel building process does not allow linking against external libraries. So we port these functions to Linux to keep the kernel code self-contained.

• **Disabling TLS.** The PGO implementation in GCC is designed for user applications and makes use of thread-local storage (TLS) in value profiling. The TLS mechanism, which uses an extra segment register, requires kernel support. However, it is not available in the kernel itself. The kernel’s per-CPU allocation, which is similar to TLS, uses a different segment register and is not available before kernel initialization. So we disable this feature in kernel instrumentation. Specifically, we add the **--disable-threads** and **--disable-tls** options when configuring and building GCC.
• **Selective instrumentation.** After the above modifications, the instrumented kernel may still not be able to boot, because some functions, if instrumented, interfere with the self-patching mechanism in the Linux kernel. To solve this problem, we further modify the Linux source file “arch/x86/kernel/paravirt.c”, using the function-specific option pragma **optimize** provided by GCC to disable value-profiling instrumentation on incompatible functions including `paravirt_nop`, `paravirt_ident_32` and `paravirt_ident_64`. Moreover, the profiler functions themselves cannot be instrumented. So we disable instrumentation on the whole gcov subsystem in the makefile.

### 3.3 Profile Collection

With the implementations described in the previous section, we have incorporated instrumentation capabilities into the Linux kernel. However, certain statistics such as the counter summary and histogram, which are required by GCC during optimization, are calculated by an auxiliary library when the instrumented program exits.
Although this is normal for user applications, such statistics for the kernel will be missing since the kernel does not actually exit after it boots up. To solve this problem, our implementation includes:

• We write a utility program to help calculate the counter summary and histogram after collecting profile data from the DebugFS interface.
• Instead of collecting profile data at program exit, we collect the profile data of the kernel on-the-fly.
• As the kernel never exits, we specify the start and the end of the profile collection process based on the start and the end of the target application running on it.

### 3.4 Application-Specific Optimizations

With profile feedback available, the compiler can now perform application-specific optimizations on the kernel. We make the following efforts to fix optimization errors and improve performance:

• **Fixing optimization problems.** Some optimizations are incompatible with kernel code, resulting in assembler errors at kernel build time. For example, code reordering is incompatible with some kernel functions that have complex inline assembly, such as the function `static_cpu_has_safe` in “arch/x86/include/asm/cpufeature.h”. To solve this issue, we disable optimization options on a per-source-file basis in the kernel makefile. The advanced compiler optimizations enabled by PGO may also cause kernel boot failure due to misoptimization. For example, the `schedule` function in “kernel/sched/core.c” may cause kernel panic if compiled with aggressive optimizations. We use the function-specific option pragma **optimize** to disable optimizations on a per-function basis.
• **Selective size optimization.** Ideally, reducing kernel code size helps reduce instruction cache misses and improve kernel performance. However, if we turn on aggressive size optimization in GCC (**-Os**), it severely degrades kernel performance (§5.3).
In order to make better size-speed tradeoffs, we modify GCC to enable aggressive size optimization only when the profile data shows that the whole translation unit\(^{1}\) is never executed. Specifically, we change the cost model of the GCC x86 backend from `ix86_tune_cost` to `ix86_size_cost` to optimize for code size, set the code alignment constraint to one byte to remove function-level code padding, and turn off optimizations that may increase code size on the never-executed code. In this way, we are able to reduce the instruction cache footprint of the kernel as much as possible without sacrificing performance.

### 3.5 Workflow and Automation

We automate Tarax with a dedicated toolchain, as shown in Figure 4. In the figure, the procedures in italic are automated using shell scripts. Only booting up the instrumented kernel and running the target application require user intervention. The workflow includes the following steps:

1. **Preparation.** We patch the Linux kernel and GCC with the above modifications. Then we build a dedicated GCC binary for kernel optimization.
2. **Instrumentation.** We configure the kernel with `CONFIG_GCOV_KERNEL` and `CONFIG_GCOV_PROFILE_ALL` options enabled and set the kernel makefile variable `CFLAGS_GCOV` to `-fprofile-generate`. Then we build the instrumented kernel with the `CC` variable set as our dedicated GCC.
3. **Profiling.** We boot the instrumented kernel and run the target application to collect kernel profile information from DebugFS. This step requires user involvement to run different applications.
4. **Optimization.** We disable the gcov-related kernel options previously set, and rebuild the kernel with the makefile variable `KCFLAGS` set as `-fprofile-correction -Wno-error=coverage-mismatch -fprofile-use -fprofile-dir=/path/to/profile`.

1) A translation unit is the input to a compiler from which an object file is generated.
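Steps 2–4 above condense to two kernel builds. A sketch of the corresponding invocations follows; the toolchain path and the `bzImage` target are illustrative, the option values are the ones quoted in the workflow, and in the actual procedure `CFLAGS_GCOV` is set inside the kernel makefile rather than on the command line:

```shell
# Step 2: build the instrumented kernel with the patched GCC.
# Assumes CONFIG_GCOV_KERNEL and CONFIG_GCOV_PROFILE_ALL are already
# enabled in .config, and CFLAGS_GCOV=-fprofile-generate is set.
make CC=/opt/tarax-gcc/bin/gcc -j"$(nproc)" bzImage

# Step 3 (user-driven): boot this kernel, run the target application,
# and copy the kernel profile data out of DebugFS.

# Step 4: disable the gcov options, then rebuild with PGO feedback.
make CC=/opt/tarax-gcc/bin/gcc \
     KCFLAGS="-fprofile-correction -Wno-error=coverage-mismatch -fprofile-use -fprofile-dir=/path/to/profile" \
     -j"$(nproc)" bzImage
```

This is a build-configuration sketch, not a runnable recipe: it presumes a patched kernel tree and the dedicated GCC from step 1.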
## 3.6 Implementation

Our implementation of Tarax is summarized as follows:

- **Linux:** We have modified eight source files (1,017 lines of code), including five in the `gcov` subsystem, two in the x86-specific code, and “kernel/sched/core.c”. The modified `gcov` subsystem contains auxiliary libraries ported to support instrumentation and profiling (420 lines of code).
- **GCC:** We have modified three source files (148 lines of code), including two in the coverage support code and one in the compiler driver.
- **Utilities:** We have implemented two utilities (395 lines of C++ code) for profile data file processing.
- **Scripts:** We have implemented six shell scripts to automate the building process.

We have submitted two patches for the `gcov` subsystem to Linux and one of them has been accepted into the mainline kernel.\(^2\) We have also submitted a patch for GCC that improves optimization option handling, which is in the revision process to meet the GCC acceptance criteria. We also plan to release the automated toolchain to the public to encourage further research and improvement in this direction.

## 4 Experimental Setup

### 4.1 Environment

Our experimental environment includes a test machine running the target applications and a client machine running benchmarking tools. Table 1 lists the experimental environment. The test machine and the client machine are connected via 10 Gigabit Ethernet. We choose Debian sid as the target Linux distribution for better hardware and software support. We also use *tmpfs* to avoid the uncertainty of disk I/O performance.

### 4.2 Benchmarking Methodology

We conduct experiments on six server applications that are known to be system-intensive, namely Apache, Nginx, MySQL, PostgreSQL, Redis and Memcached. Table 2 lists the application versions used in our experiments. We first run the six server applications on the vanilla kernel and measure their performance via benchmarking tools.
Then we carry out the optimization process described in §3.5 and get six optimized kernels for the six server applications, respectively. Finally, we run the target applications on their corresponding optimized kernels and measure their performance again. The characteristics of the six applications and their benchmarking configurations are as follows:

• **Apache**, the most popular web server, has been investigated in previous work [10] and proved to be system-intensive. We configure the web server to serve both static and dynamic requests. The response size ranges from 256 to 2048 bytes. We do not choose even larger response sizes to avoid network bandwidth saturation. On the client, we generate randomized requests, with the ratio of static:dynamic requests evenly distributed at 1:1. The tool we use is *ab*, the Apache HTTP server benchmarking tool.
• **Nginx** is another popular web server. We use the same benchmarking settings as Apache.
• **MySQL** is the most popular open-source relational database system, widely used in small websites for data management. The benchmarking tool we use is *dbt2*, an open-source implementation of the TPC-C benchmark specification. It is an online transaction processing performance test. The *dbt2* performance metric is NOTPM, the number of new order transactions processed in one minute.
• **PostgreSQL** is another popular database system. The benchmarking tool we use is also *dbt2*.
• **Redis** is the most popular key-value store, widely available on many cloud platforms. It is a mostly single-threaded program and makes use of event-driven techniques to achieve concurrency. The benchmarking tool we use is *memtier*. We configure it to generate randomized workloads with the ratio of get:set operations evenly distributed at 1:1.
• **Memcached** is another popular key-value store. Compared with Redis, Memcached is multi-threaded and event-driven, but does not support data persistence.

\(^2\) Git commit ID is a992bf83.
The benchmarking tool we use is also *memtier*.

5 Evaluation

In our evaluation, we first perform experiments to compare the performance and code sizes of the Tarax-optimized kernels and the vanilla kernel. We then perform dynamic profiling on the kernels to collect detailed statistics on instruction cache misses and branches. Finally, we switch on specific GCC optimizations with and without profile feedback, respectively, to collect performance numbers. We use these experiments to answer the following questions:

• What are the performance benefits of kernels optimized by Tarax in comparison to the vanilla kernel? Are the optimized kernels application-specific? (§5.1)
• Is Tarax general enough to adapt to different workloads, different hardware architectures and different Linux versions? (§5.2)
• How does Tarax affect kernel code sizes? (§5.3)
• Where do the performance benefits come from? (§5.4)
• Does the profile feedback really help the compiler to perform better optimizations? (§5.5)

5.1 Performance Comparison

5.1.1 Overall Performance

We first compare the overall performance of the optimized kernels and the vanilla kernel. We run each benchmark five times and calculate the arithmetic means, which are shown in Table 3. The results show that Tarax achieves positive performance improvement consistently for all six applications, with an improvement of more than 16% for Nginx. On average, application performance is improved by 8.8% when running on the corresponding optimized kernels.\(^3\) We also present the standard deviations of performance numbers from different test runs in Table 3. Although the standard deviations for specific applications may increase or decrease, they are all relatively low, which indicates that the performance improvements are stable throughout the experiments. The performance improvement numbers are in the same range as PGO on user applications. According to our experience, the JavaScript performance of Firefox improves by about 5% using PGO.
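As a quick cross-check on Table 3, the 8.8% average is the geometric mean of the six per-application speedup ratios (Tarax mean divided by vanilla mean). A small awk snippet, with the throughput numbers copied from the table, reproduces it:

```shell
awk 'BEGIN {
    # Tarax/vanilla throughput pairs from Table 3, in table order:
    # Apache, Nginx, MySQL, PostgreSQL, Redis, Memcached.
    n = split("69186/61843 298443/255397 74489/70499 83194/80943 396407/367807 464129/427715", pairs, " ")
    for (i = 1; i <= n; i++) {
        split(pairs[i], v, "/")
        log_sum += log(v[1] / v[2])
    }
    printf "%.1f%%\n", (exp(log_sum / n) - 1) * 100   # prints 8.8%
}'
```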
A recent result on SPEC CPU2006 shows 4.5% improvement after applying PGO.

3) All averages are calculated as geometric means throughout this paper, unless otherwise noted.

Table 3 Application performance on the vanilla kernel and the kernels optimized by Tarax.

<table>
<thead>
<tr>
<th rowspan="2">Application (metric)</th>
<th colspan="2">Performance (Mean/Stdev)</th>
<th rowspan="2">Improvement</th>
</tr>
<tr>
<th>Vanilla</th>
<th>Tarax</th>
</tr>
</thead>
<tbody>
<tr><td>Apache (requests/s)</td><td>61,843/0.16%</td><td>69,186/0.71%</td><td></td></tr>
<tr><td>Nginx (requests/s)</td><td>255,397/0.25%</td><td>298,443/0.30%</td><td></td></tr>
<tr><td>MySQL (trans/min)</td><td>70,499/0.25%</td><td>74,489/0.43%</td><td></td></tr>
<tr><td>PostgreSQL (trans/min)</td><td>80,943/0.59%</td><td>83,194/0.50%</td><td></td></tr>
<tr><td>Redis (operations/s)</td><td>367,807/0.45%</td><td>396,407/0.23%</td><td></td></tr>
<tr><td>Memcached (operations/s)</td><td>427,715/0.80%</td><td>464,129/0.23%</td><td></td></tr>
<tr><td>Average (geomean)</td><td></td><td></td><td>8.8%</td></tr>
</tbody>
</table>

Figure 5 Performance speedup on different optimized kernels. For each application, the numbers are normalized to the performance when it runs on the kernel optimized for itself.

5.1.2 Cross Evaluation

To investigate whether the optimized Linux kernels are really application-specific, we also run each application on kernels optimized for other applications. Figure 5 shows the result matrix, where all numbers shown are normalized to the application performance on its own optimized kernel. If the optimized kernel is best-suited for the target application, the numbers on the diagonal should be the highest; all other numbers should be below 1, since they are obtained on kernels optimized for other applications. We can see that most of the results follow this pattern, with all applications except Memcached achieving the best performance on their own optimized kernel.
For example, when we run Nginx on all six kernels, the performance of it running on other kernels ranges from 91% to 97% of the performance on its own kernel. On the other hand, although the performance of Memcached is generally good while running on other kernels, the performance of other applications running on the kernel optimized for Memcached could be as low as 93% of their best performance. One possible reason is that the kernel hot path of Memcached is a subset of those of the other applications. The results show that we have created truly application-specific Linux kernels for each application. Running an application on an arbitrary (albeit optimized) kernel could degrade its performance by 9%. Note that Apache and Nginx exhibit different behaviors on these kernels even though they are both web server applications, which indicates that it is sometimes difficult to build a uniformly good kernel even for a set of applications with similar functionalities.

5.2 Sensitivity Analysis

5.2.1 Sensitivity on Workloads

In our experiments, the workloads are generated randomly such that different workloads are used during profiling and testing. However, they still follow the same distribution. If the workload distribution of an application changes, will it affect the performance improvement achieved by Tarax? Figure 6 shows how workload changes may influence the application performance of Nginx and Memcached. The kernel for Nginx is optimized with the ratio of static:dynamic requests set as 1:1. When the ratio changes, the performance improvement of Nginx ranges from 15% to 21%. The kernel for Memcached is also optimized with the ratio of get:set operations set at 1:1. When the ratio changes, the performance improvement of Memcached ranges from 8% to almost 10%. Workload changes influence the performance of Apache and Redis similarly. Since dbt2 generates randomized database workloads, we do not perform workload sensitivity analysis on MySQL and PostgreSQL.
The results show that the optimized kernels are robust against application workload changes.

5.2.2 Sensitivity on Hardware Platforms

How does Tarax perform on different hardware platforms? Figure 7 presents the application performance improvement of the optimized kernels over the vanilla kernel on both Intel and AMD microprocessors. The AMD microprocessor we use in the comparison is FX-8350. We can see that the performance improvements of PostgreSQL and Redis are higher on AMD, but the average performance improvement is higher on Intel. On average, Tarax achieves 7.6% performance improvement on AMD, which shows that it is still effective on a different hardware platform.

5.2.3 Sensitivity on Linux Versions

How do different OS versions affect the optimization effectiveness? Figure 8 shows application performance on five different Linux versions: 3.16.7, 3.18.3, 4.0.8, 4.1.2 and 4.8.6. Because there are many changes between these Linux versions, the performance numbers vary significantly on some applications. For example, the performance improvement on Memcached ranges from 2% to 10%. However, the average performance improvement is steady and consistent, ranging from 7.5% to 10.7%, which indicates that Tarax remains effective across kernel versions. On Linux 4.8.6, Tarax achieves 10.7% average performance improvement, the highest among the five versions, showing that Tarax stays effective as the Linux kernel evolves.

5.3 Kernel Code Size Comparison

A smaller kernel is beneficial as it could reduce instruction cache misses (which will be shown later). Figure 9 compares the code sizes of the optimized kernels with the vanilla kernel and the kernel compiled with aggressive size optimization (-Os). We measure the .text section size of the kernel image. We can see that the optimized kernels are significantly smaller than the vanilla kernel compiled with the default -O2 option, but they are a little larger than the kernel compiled with -Os.
We also compare the performance of the Tarax-optimized kernels and the kernel compiled with -Os, which is shown in Figure 10. We can see that the kernel compiled with -Os is much slower than the Tarax-optimized kernels; it is even slower than the vanilla kernel (-O2). This shows that aggressive size optimization can actually degrade kernel performance; with profile feedback, however, the compiler can make better decisions on size-speed tradeoffs. 5.4 Dynamic Profiling Analysis In order to explain how the application-specific kernels are optimized, we perform dynamic profiling with perf [11] to collect performance-related statistics. We sample for 10 seconds during application execution. As perf supports profiling the kernel and user mode separately, we can calculate instruction cache (I-cache) miss rates for the kernel and the application, respectively. We use the number of executed instructions to approximate the number of I-cache accesses. We then calculate the misprediction and taken rates of branch instructions in kernel mode. 5.4.1 Instruction Cache Statistics Figure 11 presents the I-cache statistics for both kernels and applications. We can see that the I-cache miss rates for applications are reduced slightly in five of the six benchmarks (the I-cache miss rate of Redis increases slightly from 0.37% to 0.41%). However, the I-cache miss rates for the kernels are significantly reduced; the biggest reduction is 2.17 percentage points, for MySQL. For Memcached, the I-cache miss rate for the kernel is reduced by more than 58% (from 0.31% to 0.13%). The result shows that Tarax improves the I-cache behavior, which is a major contribution to the kernel performance speedup. 5.4.2 Branch Optimizations Figure 12 shows the profiling results for branch instructions in kernel mode. We expect the compiler to make better decisions on branch prediction and code layout with profile feedback.
Figure 12(a) shows that compiler branch prediction does not help reduce the branch misprediction rate in the kernel, which is expected (explained in §2.1). Instead of reducing branch mispredictions, the compiler exploits branch probabilities to reduce the number of taken conditional branches. Figure 12(b) shows that the number of taken branches has been reduced by over 50% for some kernels. Figure 12(c) shows over a 1/3 reduction in branch-taken rates for all optimized kernels. Note that the branch-taken rates on the vanilla kernel are all higher than 50%, because without profile feedback the compiler cannot reverse the condition and invert conditional branches used in loops. Reducing taken branches favors I-cache locality as well, and is another contribution to the kernel performance speedup. 5.4.3 Function Inlining We also use clock-cycle-based sampling to see which kernel functions are live at runtime. Taking Apache as an example, Figure 13 shows the top 10 live kernel functions when it runs on the vanilla kernel and on the optimized kernel, respectively. We can see that many of the top 10 functions in the two kernels differ. Take the most frequently executed function, thread_group_cputime, as an example: it is invoked by the function thread_group_cputime_adjusted, whose source code is shown in Figure 14. In the optimized kernel, thread_group_cputime does not appear in the sampling results, as it is inlined during optimization. Kernel developers do not inline the function manually because it is invoked in multiple places: inlining it everywhere it is invoked may bloat the kernel and hurt performance (which will be shown later). In the vanilla kernel, the compiler does not inline the function either, because it acts conservatively without a runtime profile. With profile information available, however, GCC is able to perform smarter inlining at the places where the callee is invoked most frequently.
5.5 Profile Feedback Analysis We manually control some GCC optimization options and compare their performance to see how the runtime profile influences these optimizations. Specifically, we control the option -finline-functions for function inlining and the option -freorder-blocks-and-partition for code reordering. Figure 15 shows the results of the profile feedback analysis, namely the performance improvements of enabling the options over disabling them, with or without profile feedback.

void thread_group_cputime_adjusted(struct task_struct *p,
                                   cputime_t *ut, cputime_t *st)
{
        struct task_cputime cputime;

        thread_group_cputime(p, &cputime);
        cputime_adjust(&cputime, &p->signal->prev_cputime, ut, st);
}

Figure 14 Code of the kernel function thread_group_cputime_adjusted.

Figure 15 Effects of profile feedback on different GCC optimizations; the results shown are performance improvements of enabling the respective option over disabling it, with or without profile feedback.

From Figure 15(a), we can see that the performance improvements of five applications are higher when performing function inlining with the runtime profile. The profile feedback of PostgreSQL does not help the compiler perform better function inlining on the kernel. For Memcached, aggressive function inlining without profile feedback severely degrades performance, by over 6%. Figure 15(b) shows that the performance improvements of all six applications are higher when performing code reordering with the runtime profile. For PostgreSQL, code reordering without profile feedback degrades performance by 1.4%. The results show that the runtime profile is beneficial for these two optimizations in most cases. Without profile feedback, aggressive optimizations could degrade performance. 6 Discussions 6.1 Application Scenario Tarax can be used as a general approach to improving application performance by optimizing the underlying kernel. We can use Tarax to adapt the kernel to any specific application or scenario.
For example, when an application evolves to a new version, we can easily rebuild a new kernel to offer the best possible performance for the new version. This would be almost impossible with manual redevelopment. 6.2 Kernel Stability Guarantees In Section 3.4, we disabled aggressive PGO optimizations on particular files that caused trouble during optimization. We have not found any other cases that may affect kernel stability at runtime after applying Tarax. However, to further ensure that PGO does not introduce any instability into the kernel, we can disable all aggressive optimizations enabled by PGO. In this way, Tarax applies the same optimizations as -O2, with only the static branch predictions of GCC replaced by profile feedback. Thus we can guarantee that the Tarax-optimized kernels are as stable as the kernel optimized with -O2. Disabling aggressive optimizations may reduce the speedup. However, based on our preliminary experiments on Linux 4.8.6, the speedup is still about 6% on average, which shows that profile feedback helps improve kernel performance even without these extra optimizations. 6.3 Sampling-Based Profiling Traditionally, PGO requires instrumentation to collect profile feedback. GCC has recently introduced AutoFDO, which can collect feedback using sampling-based profiling. We do not adopt AutoFDO in Tarax because it is limited to CFG arc profiling and requires last-branch-record support from Intel processors; it cannot be applied to other processors or to virtual machines. Moreover, the performance improvement of AutoFDO is 15–22% lower than that of PGO [12,13]. 6.4 Further Optimizations The current Tarax implementation makes few modifications to the existing optimizations in GCC; we have mainly tried to leverage them to create application-specific Linux kernels. Although the current results are already promising, we expect that more aggressive optimizations could be applied along this direction.
For example, we have shown that with profile information available, the compiler makes better size-speed tradeoffs than aggressive size optimization (-Os) does (§5.3). In some cases, however, profile information does not help and can even degrade performance (§5.5). These observations indicate that more fine-grained control over GCC optimizations can potentially achieve greater improvement. 6.5 Limitations During the implementation and evaluation of Tarax, we have made some choices and tradeoffs to stay focused on the main objective: to improve the application-specific performance of the Linux kernel. Application selection. This paper focuses on optimizing the kernel for server applications because many server applications are known to be system-intensive. Unlike server applications, desktop and mobile applications are mostly interactive. The performance problems in these applications often reside in their models of human-computer interaction, rather than in the kernel. As a result, these applications may not benefit much from Tarax. Experimental setup. In the experiments, we try to reduce factors other than the kernel that may influence application performance, even though such factors widely exist in real-world settings. For example, we use tmpfs to avoid the uncertainty of disk I/O performance. We also use a high-speed network (10 Gbps) to increase throughput and stress the kernel, as such networks are already used in many cloud environments. Evaluation methodology. Since PGO is a machine-learning-like approach, a strict evaluation may need separate training and testing inputs. For example, SPEC CPU2006 requires that only the training input be used in PGO. We do not strictly follow such rules because we want to emphasize the potential benefits of Tarax. Although we use the same benchmarking tool for both training and testing, random workloads are generated for each execution. 6.6 Future Directions Building virtual appliances for the cloud.
As most cloud servers run virtual machines instead of operating systems on bare metal, we can extend Tarax to perform optimizations on the combination of the Linux kernel, the middleware and the application, creating a specially optimized software stack that can be distributed and deployed as a virtual appliance. We have applied Tarax and PGO to the popular LEMP stack, which consists of Linux, Nginx, MySQL and PHP, and achieved a 5.4% performance improvement for WordPress. Link-time optimization (LTO). Previous work has explored link-time compaction and specialization techniques to reduce the memory footprint of the Linux kernel [14]. Tarax can also be combined with LTO. Performing optimization at link time gives the compiler more opportunities to carry out inter-procedural analysis, which becomes more accurate when profile feedback is available. Kernel refactoring. Another future direction is profile-guided restructuring of the kernel source code. The profile feedback can help us uncover the runtime relations between kernel functions. We can use this information to eliminate unnecessary functions in the kernel through proper refactoring. We can also use the profile information to rearrange functions in translation units, which increases optimization opportunities. 7 Related Work We discuss related work in three areas: application-specific operating systems, general kernel performance optimization, and feedback-directed optimization. 7.1 Application-Specific Operating Systems Anderson first proposed the idea of the application-specific operating system [2], a design in which as much of the operating system as possible is pushed into runtime library routines linked with each application. Earlier application-specific operating systems are based on kernel specialization [15], which can improve the performance of a specific system call. However, kernel specialization has not been applied to the whole kernel as Tarax does.
Since the exokernel [3] was proposed, much research has pursued application-specific OS kernels based on the library OS principle. Modern library OSs use virtual machine monitors as the exokernel. The Mirage unikernels [4] are single-purpose appliances that are specialized at compile time, with significant reductions in image size and improvements in efficiency and security. Arrakis [6] and IX [7] are optimized for high I/O performance in datacenter workloads. They both adopt the library OS principle and use virtualization technologies to accelerate I/O. A library OS is by construction specific to its applications. Although it can achieve significant size and speed improvements, a library OS typically requires reimplementing the entire kernel, and often even the applications running on it. Instead of reimplementing an application-specific kernel, Tarax leverages profile-guided recompilation, so that the kernel can be optimized for each application without source code modifications to either the kernel or the application. Furthermore, we can apply PGO to customized OS kernels such as library OSs, bringing extra application-specific performance benefits on top of the library OS design. 7.2 Kernel Optimization Improving kernel performance [16] is a long-standing topic in the OS research community. With every generation of computer innovation, there has been extensive research on how to improve OS performance accordingly [17]. Besides research publications on kernel performance [10, 18], many more optimizations have been applied to the Linux kernel to fix performance bugs and improve its performance, but most of these implementation efforts have never been published. For the Linux file system and memory management subsystems, 8% and 27.4% of patches, respectively, are performance optimizations [19, 20]. All these performance optimizations focus on specific performance problems, but are general across applications.
Tarax pursues the opposite: it targets the whole kernel but is specific to each application scenario. 7.3 Feedback-Directed Optimization Feedback-directed optimization (FDO) is a more general concept than PGO. FDO describes all techniques that alter the execution of a program based on tendencies observed in its present or past runs. PGO alters the execution of the target program via compilation, based on tendencies observed in past runs of the program. Previous work has explored kernel performance improvement opportunities using profile-based compiler optimizations [21,22]. Although they share similar goals with Tarax, they perform kernel PGO on systems such as the HP9000/720 [21] and the AS/400 [22], which have been obsolete for decades. In contrast, Tarax applies PGO to the Linux kernel, which is much more complex, widely adopted and well supported. Profile feedback can also play an important role in specific kernel optimizations such as I-cache packaging [23] and on-demand loading of infrequently executed code [24]. Although FDO/PGO techniques have been extensively used in user applications, they have not been widely adopted in the kernel. To the best of our knowledge, Tarax is the first comprehensive approach that enables PGO on the Linux kernel and achieves significant performance improvement. 8 Concluding Remarks We have presented Tarax, a compiler-based approach that takes advantage of PGO to construct application-specific operating systems. Specifically, Tarax extends the current PGO implementation in GCC to enable Linux kernel instrumentation, profiling and application-specific optimization. Experimental results on six popular server applications show that Tarax can improve Linux kernel performance by up to 16%. Detailed analysis has provided insights into how profile feedback helps GCC perform better optimizations on the Linux kernel in an application-specific manner.
With Tarax, we believe there will be abundant opportunities to improve the Linux kernel performance further for each application running on it. Acknowledgments This work was partly supported by the National Key Research and Development Program (No. 2017YFB1001904) and the National Natural Science Foundation of China (No. 61772042). Conflict of interest The authors declare that they have no conflict of interest. References
Investigating Web APIs on the World Wide Web Maria Maleshkova, Carlos Pedrinaci, John Domingue Knowledge Media Institute (KMi) The Open University Milton Keynes, United Kingdom {m.maleshkova, c.pedrinaci, j.b.domingue}@open.ac.uk Abstract—The world of services on the Web, thus far limited to “classical” Web services based on WSDL and SOAP, has been increasingly marked by the domination of Web APIs, characterised by their relative simplicity and their natural suitability for the Web. Currently, the development of Web APIs is rather autonomous, guided by no established standards or rules, and Web API documentation is commonly not based on an interface description language such as WSDL, but is rather given directly in HTML as part of a webpage. As a result, the use of Web APIs requires extensive manual effort, and the wealth of existing work on supporting common service tasks, including discovery, composition and invocation, can hardly be reused or adapted to APIs. Before we can achieve a higher level of automation and make any significant improvement to current practices and technologies, we need to reach a deeper understanding of them. Therefore, in this paper we present a thorough analysis of the current landscape of Web API forms and descriptions, which has to date remained unexplored. We base our findings on manually examining a body of publicly available APIs and, as a result, provide conclusions about common description forms, output types, usage of API parameters, invocation support, level of reusability, API granularity and authentication details. The collected data provides a solid basis for identifying deficiencies and realising how we can overcome existing limitations. More importantly, our analysis can be used as a basis for devising common standards and guidelines for Web API development. Keywords—Web APIs, RESTful services, Web services I.
INTRODUCTION The world of services on the Web is increasingly dominated by Web applications and APIs, which seem to be preferred over “classical” Web services based on WSDL and SOAP. Web services have played and, without a doubt, will continue to play a major role for the development of loosely-coupled component-based systems within and between enterprises. However, Web APIs, also referred to as RESTful services [1] when conforming to the REST architectural principles [2], are characterised by their relative simplicity and their natural suitability for the Web, relying almost entirely on the use of URIs, for both resource identification and interaction, and HTTP for message transmission. On the basis of this simple technology stack, many Web sites like Facebook, Google, Flickr and Twitter offer easy-to-use, public APIs that provide simple access to some of the resources they hold, thus enabling third-parties to combine and reuse heterogeneous data coming from diverse services in data-oriented service compositions called mashups. Despite their popularity, the use of Web APIs still requires extensive manual effort, which is most often focused on the development of custom tailored software that can hardly be reused. A number of researchers and developers are devising generic solutions for better supporting the discovery, reuse, invocation, and composition of Web APIs [3], [4]. These approaches build upon the wealth of research on Web services and adapt it to deal with Web APIs. Yet, a quick look at some of the existing Web APIs shows significant differences when compared to classical Web services. The most notable distinction lies in the fact that there is no established interface definition language, although some researchers have already tried to address this aspect [5], [6]. In fact, as opposed to Web service technologies, work around Web APIs has evolved in a rather autonomous way, which is perhaps one of the main reasons for their rapid proliferation. 
Before any significant impact and improvement can be made to current Web API practices and technologies, we need to reach a deeper understanding of these. This involves, for instance, figuring out how current APIs are developed and exposed, what kind of descriptions are available, how they are represented, how rich these descriptions are, etc. It is only then that we shall be able to clearly identify deficiencies and realise how we can overcome existing limitations, how much of the available know-how on Web services can be applied and in which manner. To this end, in this paper we present a thorough analysis over a body of publicly available API descriptions. In particular, we analyse how Web APIs are published, we check which information is provided and its level of detail. We investigate the characteristics of input parameters and record the API categories. Similarly, we study the provided output descriptions and analyse the different types of APIs as well as the availability of relevant details such as the HTTP method, invocation URI and authentication requirements. We also record whether example requests and responses are provided, since they indicate how the communication between the client and the server is realised. Finally, we also study general API information, such as the number of mashups and operations, in order to be able to draw conclusions about the reusability and the granularity of the APIs. The analysis exposed in this paper provides a reality check over the current state and practices with Web APIs and certainly contributes to understanding where we are, helps us in better realising what needs to be done, and also assists us in devising supporting mechanisms. In this sense, we show that the current proliferation of Web APIs is not due to the increased use of REST principles, since according to our study, most Web APIs do not have RESTful descriptions and how APIs are described is not significant for reusability. 
Instead, simplicity and the trend towards opening up data are driving the evolution that results in the world of services on the Web being increasingly dominated by Web applications and APIs. The remainder of this paper is structured as follows: Section II describes the methodology used for conducting our Web API study, while Section III presents the collected data. A summary of the main results and a discussion of identified correlations and trends are provided in Sections IV and V. Section VI presents an overview of existing work on analysing Web services, and Section VII presents future work and concludes the paper. II. METHODOLOGY The study presented herein was conducted during February 2010, analysing 222 Web APIs from the ProgrammableWeb¹ directory. ProgrammableWeb is a popular API directory that, at the time of this writing, provides information about 2002 APIs and 4827 mashups. For easier search and browsing, the APIs are sorted into categories, and our analysis covered all 51 categories, including on average 4 APIs per category. The analysed Web APIs for each category were randomly chosen; however, since some categories have only one or two entries, the number of analysed Web APIs per category varies. As a result, the survey covered 18% of the REST APIs listed at ProgrammableWeb (1235 APIs at the time of the study). We therefore consider the following results to be representative for the directory and in general, since ProgrammableWeb is currently the biggest directory². Each Web API description was analysed in terms of six main groups of features, including general Web API information, type of Web API, input parameters, output formats, invocation details and complementary documentation. The Web API analysis was conducted manually, and some features, such as the type of Web API, were examined twice in order to achieve greater accuracy.
More concretely, each Web API was examined in terms of:

1) General Web API information – name of the API, description, category, number of mashups, date updated, URL and number of operations.
2) Type of Web API – details on whether the API description is RESTful, RPC-style or hybrid (for more details see Section III-B).
3) Input parameters – does the API use default parameters, optional parameters, coded parameters (for example, "en" instead of "English") or parameters with alternative values (for example, an input value of 1, 2 or 3); is the data type of the input parameter stated; and are boolean (yes/no, true/false) parameters used.
4) Output formats – the form of the output (for example, XML or JSON) and whether it is selected via a parameter.
5) Invocation details – is the HTTP method provided, is the invocation URI provided, does the API require authentication and, if so, of what type, how are the input parameters transmitted, and how is the authentication information transmitted.
6) Complementary documentation – does the description provide an example request, an example response and a list of error messages/codes.

We focus our analysis on precisely these groups of API features because each of them plays an important role in a different aspect of API use. The general information provides insights into the information that is commonly used to describe Web APIs in directories and how this information is captured, including temporal details, reusability and level of granularity. Since an important part of current research on APIs is focused on investigating and contrasting different Web service types (REST vs. WSDL and SOAP) [3], we also record and analyse the existing types of Web APIs. We study input parameters, output formats and invocation details, since they serve as the basis for conducting the main service tasks.
These Web API features are present in all interface description languages (IDLs), as they are considered essential for invocation [3], composition [4] and discovery. The complementary documentation provides details on how the communication between the client and the service is realised, and on the possible errors that can occur. The analysis approach involved a sequence of simple steps. First, for each API picked for the study, the ProgrammableWeb webpage was opened. The APIs to analyse were randomly chosen within each category, covering all categories. This was necessary in order to ensure that the results are domain-independent and at the same time representative for the whole directory. For each API the general information was recorded. Second, the provider’s Web API description was examined, recording the documentation URL, counting the number of operations and determining the type of the API. For RPC-style and hybrid APIs each operation was counted, while for RESTful ones, each resource representation manipulation/retrieval through an HTTP method was counted as one operation. For example, GET on the UserProfile resource is one operation, while PUT on the same resource is another. We also analysed the input parameters of each operation; in the case of RESTful services these are also referred to as the scope [1]. For the output of each API, the format was recorded, including the available alternatives and how they are chosen (through parameterisation or through a separate invocation URI). Finally, the invocation details included in the description, and the complementary documentation, were recorded. We did not perform any test invocations of the APIs, since we aim to gain a picture of the current state of the Web API landscape as depicted by the APIs’ descriptions. Conducting the study took around three weeks, since the documentation of every Web API had to be reviewed manually.
In the process, we noticed that the work was slowed down by the fact that the description forms and structures are very diverse, and each API had to be examined from scratch, without our being able to benefit from the analysis of previous APIs. This already provides some indication of the difficulties arising from having to deal with heterogeneous textual API documentation. III. ANALYSING COMMON WEB API DESCRIPTIONS In this section we describe the data collected from the Web API study. The results are structured into six groups, according to the different parts of the API descriptions that were analysed. A. General Web API Information The general Web API information analysed includes details provided directly by the API directory, such as the name of the API, its description, the category that it is assigned to, the URI of the API and the latest update of the description. Table I provides the exact numbers for these features. Table I: General Web API Information <table> <thead> <tr> <th>Description</th> <th>Maximum</th> <th>Minimum</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>APIs per Category</td> <td>12</td> <td>1</td> <td>4</td> </tr> <tr> <td>Number of Mashups</td> <td>506</td> <td>0</td> <td>6.4</td> </tr> <tr> <td>Number of Operations</td> <td>over 200</td> <td>1</td> <td>15.5</td> </tr> </tbody> </table> Of these general details, the number of mashups is of particular relevance, since it provides an indicator of the reuse of Web APIs, and to a certain extent can help to highlight factors influencing the reusability of APIs. The analysis shows that a few APIs are highly reused, whereas most APIs are used in very few or no mashups at all. In particular, there are 136 APIs with 0 mashups, 60 APIs with 1 to 4 mashups and 26 APIs with 5 to 506 mashups. The API with the most mashups is Flickr, which can be easily integrated into different Web applications as a source of images and photos.
In summary, a small share of the APIs (12%) is used very frequently, while most APIs are rarely or never used as part of mashups. It must also be noted that the number of mashups is as provided by ProgrammableWeb; the actual values may therefore differ somewhat. However, for comparison purposes the data is still representative, since it comes from the same source for all APIs. The general API information collected also delivers some valuable insights about the granularity, i.e. the number of operations, of the APIs. 109 of the APIs, or about 50%, have 1 to 7 operations, while 36 APIs, or 16%, have only 1 operation. 92 APIs have between 7 and 50 operations, where more APIs have fewer operations. Finally, only 21 APIs have between 50 and over 200 operations (Yahoo Ads). This leads us to the conclusion that the majority of the APIs are small and have very few operations. We investigated whether there is a correlation between the size of the APIs and their use as part of mashups, but even though social and community Web sites seem to expose a larger number of operations, there are important exceptions such as del.icio.us³, which has only 15 operations but 142 mashups, and geocoder⁴ with 3 operations but 28 mashups. The data provided no proof of a relation between the level of reusability of APIs and their granularity. The analysed API descriptions were updated between 02.06.2005 and 14.01.2010, which shows that the ProgrammableWeb directory has been enriched during the past five years, but also that some descriptions are old, which might be an indication that they are out of date. There were relatively few descriptions from 2005, 2006 and 2007 (11, 33 and 27, respectively) and around 60 and 80 for 2008 and 2009, respectively. This might indicate either that an increasing number of APIs have been published during the past two years, or that older descriptions have been updated, even though the APIs were created earlier.
Since we do not have the date of creation of the API entries but only the update dates, we cannot make a conclusive statement. Finally, based on the general Web API information, our analysis highlighted that, since all details are added manually to the Web API directory, some of the feature descriptions were not always accurate. This is especially true for the URL of the documentation, which was sometimes moved or no longer available, and for the authentication information, which was very often inaccurate. This is indicative of the difficulties resulting from using directories based on user entries, the two main ones being the retrieval of outdated information, because the entries cannot be automatically updated, and the retrieval of erroneous information, due to wrong or inaccurate user input. Therefore, despite the fact that currently these manually created directories are the easiest way to search for APIs, there is a need for developing approaches that automatically crawl and extract accurate API descriptions from the Web. ³http://delicious.com/help/api ⁴http://geocoder.us/help/ B. Type of Web APIs In this section we describe our findings regarding the different types of Web APIs and their frequency of use. We have identified three types of APIs: RESTful, RPC-style and hybrid. RESTful services are defined as services which conform to the representational state transfer (REST) paradigm [2]. REST is based on a set of constraints, such as client-server communication, statelessness of the requests and the use of a uniform interface. A RESTful Web service is commonly implemented using HTTP, comprising a collection of uniquely identified resources and their links to each other. In addition, RESTful services are characterised by resource-representation decoupling, so that resource content can be accessed in different formats.
For the scope of our study, we identify Web APIs as RESTful when their descriptions indicate that they are resource-centred and that data retrieval and manipulation is done only through the HTTP methods. Example APIs include MusicBrainz (http://wiki.musicbrainz.org/XMLWebService) and Doodle (http://doodle.com/xsd1/RESTfulDoodle.pdf). RESTful services can have a scope, i.e. a set of parameters, that restricts the effect of the HTTP methods on the resource only to the entities determined by the parameter values. For example, instead of retrieving all news resources in the News collection by using GET (HTTP GET http://url/.../News), the API can also be invoked with a parameter to retrieve only news created by a particular user (HTTP GET http://url/.../News?user=aUser). In comparison to RESTful APIs, RPC-style ones do not use the HTTP methods directly to access resources, but rather define their own operations, wrapping the resource information, and then invoke these through one of the HTTP methods. For example, an RPC-style API providing the same information as the RESTful news one would look like HTTP GET http://url/.../getNews, and there can again be a scope or a set of parameters (HTTP GET http://url/.../getNews?user=aUser). Example APIs include GeoNames (http://www.geonames.org/export/web-services.html) and Daylife (http://developer.daylife.com/docs). It is important to point out that we base our classification strictly on the API descriptions, since RPC API implementations can be wrapped and described as RESTful, and RESTful implementations can have operations such as getNews, which are in fact realised by using the GET HTTP method on the News resource.
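To make the notion of scope concrete, the parameterised retrieval above can be sketched in a few lines of Python; the base URI and parameter names are illustrative placeholders, not taken from any real API:

```python
from urllib.parse import urlencode

# Hypothetical base URI for the News collection discussed above.
BASE = "http://example.com/api/News"

def scoped_uri(base, **scope):
    """Append scope parameters (if any) to a resource URI."""
    return f"{base}?{urlencode(scope)}" if scope else base

# GET on the whole collection vs. a retrieval scoped to one user:
print(scoped_uri(BASE))                # http://example.com/api/News
print(scoped_uri(BASE, user="aUser"))  # http://example.com/api/News?user=aUser
```

Issuing an HTTP GET on the first URI retrieves the full collection, while the second restricts the result to news created by the given user.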
Still, our definitions of Web API types share common understanding with the ones given in [1], stating in essence that RPC APIs expose internal functionality through a complex, programming-language-like interface that is different for every service, while resource-oriented APIs expose internal data through a simple document-processing interface that is always the same. Hybrid APIs, as the name suggests, represent a mix between RESTful and RPC ones. Hybrid-style APIs define their own operations, but employ operation semantics that contradict the HTTP method used. For example, a hybrid API can realise the getNews operation through POST and addNews through GET. Example hybrid APIs include ClearForest (http://www.opencalais.com/documentation/calais-web-service-api), which uses POST for getting resources, and Box.net (http://developers.box.net/ApilOverview), where adding a new element can be done by using GET. The use of hybrid APIs can be very problematic, since they do not guarantee operation safety, especially in cases where data manipulation is realised by using GET, because of the possibility of unintentional data modification. In such cases a simple crawler can change or delete resources, since it would use GET, expecting to retrieve information instead of altering it. Table II: Type of Web APIs <table> <thead> <tr> <th>Description</th> <th>In %</th> </tr> </thead> <tbody> <tr> <td>RPC-Style</td> <td>47.8</td> </tr> <tr> <td>RESTful</td> <td>32.4</td> </tr> <tr> <td>Hybrid</td> <td>19.8</td> </tr> <tr> <td>Mashups with RPC-Style APIs</td> <td>42</td> </tr> <tr> <td>Mashups with RESTful APIs</td> <td>34</td> </tr> <tr> <td>Mashups with Hybrid APIs</td> <td>24</td> </tr> </tbody> </table> Table II shows the distribution of the different types of APIs. As can be seen, currently almost half of the Web APIs are RPC-style and about one third are RESTful. The hybrid APIs represent about 20% of the analysed data.
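The safety problem posed by hybrid APIs can be illustrated with a simple heuristic check that flags operations whose name suggests one intent while the HTTP method implies another. The verb prefixes below are our own illustrative choice, not part of any standard:

```python
# Heuristic sketch: flag "hybrid" operations whose verb prefix contradicts
# the HTTP method they are invoked with (prefix lists are illustrative).
SAFE_PREFIXES = ("get", "list", "search", "find")
MODIFYING_PREFIXES = ("add", "create", "update", "delete", "set")

def is_hybrid(operation_name: str, http_method: str) -> bool:
    name = operation_name.lower()
    method = http_method.upper()
    if name.startswith(SAFE_PREFIXES) and method != "GET":
        return True   # read-style operation invoked with a modifying method
    if name.startswith(MODIFYING_PREFIXES) and method == "GET":
        return True   # data manipulation via GET breaks operation safety
    return False

print(is_hybrid("getNews", "POST"))  # True  (ClearForest-style)
print(is_hybrid("addNews", "GET"))   # True  (Box.net-style)
print(is_hybrid("getNews", "GET"))   # False (regular RPC-style)
```

The second case is the dangerous one discussed above: a crawler issuing GET requests could unintentionally trigger data modification.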
This shows that even though RESTful services are by design suitable for the Web, since they are based on the same principles, their level of adoption is still relatively low. Instead of identifying resource collections and manipulating them with the help of HTTP methods, developers prefer to define their own operations, whose functionality sometimes even contradicts the HTTP method used (hybrid APIs). As a result, two thirds of the API descriptions are structured very much like common interface definitions, disregarding the REST principles. A very similar distribution can be detected among the APIs that are reused as part of mashups: 42% of these APIs are RPC-style, 34% RESTful and 24% hybrid. Therefore we can conclude that API reuse is not driven by the type of description, since the mashup percentage distribution matches the overall Web API distribution almost exactly. As a result, we can argue that the current proliferation of Web APIs cannot be attributed to the use of RESTful services. As our study shows, most Web APIs do not have RESTful descriptions, and how APIs are described is not significant for reusability. C. Input Parameters We also thoroughly analysed the information in the API descriptions relating to the input parameters. As can be seen in Table III, about 60% of the APIs use optional parameters, while 45% use default values. This has a strong effect on matchmaking and invocation approaches, since an API may or may not be found depending on whether optional parameters are taken into account. Similarly, if invocation is done on the basis of default values, the output results can change drastically. For example, many APIs have XML as the default output format, but some use JSON as the default. If the default parameter value is relied upon, the results might be retrieved in the wrong format, making them useless.
The fact that many APIs use alternative values for one parameter (for example, a range of 1, 2 or 3) and coded values (for example, a language code instead of the full language name) makes API invocation even more challenging. For the invocation of single APIs, the input data has to be transformed into the correct format, which can be very difficult, since sometimes the lists of alternative or coded values are not provided. For the invocation of mashups, the transformation between the outputs of one API and the inputs of the next one has to be defined. Currently, this work requires extensive manual effort, and the adaptation of existing Web service invocation approaches is hindered by the under-specification and the variability of the parameters. This situation is aggravated by the fact that two thirds of the APIs do not even state the data-type of the input parameters. As a result, developers need to determine the proper input format by making assumptions or through trial-and-error. In addition, the reuse of existing invocation approaches, or the development of new ones, is made extremely difficult, since the data-type information is simply not available. If a standard interface description language, such as WSDL, were used to describe Web APIs, not specifying the data-types would be unthinkable. However, the current state of Web APIs shows that this is not always necessary. Since there is no common IDL, under-specification is very common, yet it in no way affects the level of reuse of APIs: our data showed no correlation between stating the data-type of input parameters and the number of mashups.
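The parameter handling described above (defaults, optional parameters and coded values) can be sketched as a small normalisation step that a client would have to perform before invocation. All parameter names, default values and code tables here are hypothetical:

```python
# Illustrative sketch of the invocation problem described above: before an
# API call, defaults must be applied and human-readable values mapped to
# the coded values the API expects (all names below are made up).
DEFAULTS = {"format": "xml", "count": 10}
LANGUAGE_CODES = {"English": "en", "German": "de", "French": "fr"}

def normalise(params):
    resolved = dict(DEFAULTS)   # start from the default values
    resolved.update(params)     # optional parameters override the defaults
    if "language" in resolved:  # translate to the coded value
        resolved["language"] = LANGUAGE_CODES[resolved["language"]]
    return resolved

print(normalise({"language": "German"}))
# {'format': 'xml', 'count': 10, 'language': 'de'}
```

When the code table is not documented, as the study found is often the case, this mapping step has to be reverse-engineered through trial-and-error.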
Table III: Input Parameters <table> <thead> <tr> <th>Description</th> <th>Number</th> <th>In %</th> </tr> </thead> <tbody> <tr> <td>APIs w/t optional parameters</td> <td>136</td> <td>61.3</td> </tr> <tr> <td>APIs w/t alternative values for a parameter</td> <td>114</td> <td>51.3</td> </tr> <tr> <td>APIs w/t default values for parameters</td> <td>99</td> <td>44.6</td> </tr> <tr> <td>APIs that state the data-type of the parameters</td> <td>61</td> <td>27.5</td> </tr> <tr> <td>APIs w/t coded values for a parameter</td> <td>55</td> <td>24.8</td> </tr> <tr> <td>APIs w/t boolean parameters</td> <td>39</td> <td>17.6</td> </tr> </tbody> </table> D. Output Formats As can be seen in Table IV, there are two main common output formats: XML and JSON. XML is provided in 85% of the cases and JSON in 42%, while more than one third of the APIs provide both. Further output formats include HTML, CSV, RDF, Text, object, RSS, GFF, serialised PHP, Tab and YAML. These results show that supporting the two main formats alone would cover the vast majority of APIs. The way the results should be structured can be specified in two ways: either the API provides a separate operation for every output format, or the format is determined through a parameter. This might present a challenge for invocation, since currently there is no commonly accepted way of stating the desired output format. E. Invocation Details In this section we describe our findings in relation to the invocation details commonly provided in API descriptions. The collected data is of crucial importance, since it has a direct impact on the usability of the APIs. Table V: Invocation Details <table> <thead> <tr> <th>Description</th> <th>Number</th> <th>In %</th> </tr> </thead> <tbody> <tr> <td>Provide HTTP method</td> <td>134</td> <td>60.4</td> </tr> <tr> <td>Provide invocation URI</td> <td>214</td> <td>96.4</td> </tr> </tbody> </table> Table V shows that almost all descriptions provide the URI for invoking the API, while only about two thirds state the HTTP method to be used.
This is possibly because providers assume that the method to use is GET, especially for APIs that can be invoked directly by parameterising the URI. Table IV: Output Formats <table> <thead> <tr> <th>Description</th> <th>Number</th> <th>In %</th> </tr> </thead> <tbody> <tr> <td>XML</td> <td>80</td> <td>36</td> </tr> <tr> <td>XML, JSON</td> <td>53</td> <td>23.9</td> </tr> <tr> <td>XML and other</td> <td>34</td> <td>15.3</td> </tr> <tr> <td>XML, JSON and other</td> <td>23</td> <td>10.4</td> </tr> <tr> <td>only JSON</td> <td>12</td> <td>5.4</td> </tr> <tr> <td>only other</td> <td>14</td> <td>6.3</td> </tr> <tr> <td>JSON and other (except XML)</td> <td>6</td> <td>2.7</td> </tr> <tr> <td>RDF</td> <td>13</td> <td>5.8</td> </tr> <tr> <td>Total XML</td> <td>190</td> <td>85.6</td> </tr> <tr> <td>Total JSON</td> <td>94</td> <td>42.4</td> </tr> </tbody> </table> Our analysis also shows that more than 80% of the APIs require some form of authentication (Table VI). As can be seen, using an API key (also called “developer key”, “developer token”, “token Id”, “user Id” or “user key”) is by far the most common way of authentication (38%). It is followed by the 19% of APIs that do not require any authentication. HTTP Basic and HTTP Digest [7] are not used as often (14% and 5%, respectively), while about 6% of the APIs use OAuth [8] and 5% implement their own authentication operations, which need to be called before any other operation can be invoked. There are also some APIs that require authentication only for operations that perform data modification, but none for merely reading resources. In summary, in at least 40% of the cases information required for the invocation of the APIs is missing, and roughly 4 out of 5 APIs require some form of authentication, which means that developers have to sign up with providers to acquire the appropriate credentials.
In addition, there is no established approach for Web API authentication, but rather a landscape of different approaches. Also, only about a quarter of the APIs use a mechanism that protects the user credentials and does not transmit them directly in plain text. This shows that providers are not so much concerned with verifying the user identity, and do not invest implementation work in securing the message transfer, but rather prefer to employ simple measures for controlling resource usage. This is confirmed by the fact that less than 10% of the Web APIs use signatures and encryption. Table VII: Way of Transmitting Credentials <table> <thead> <tr> <th>Transmission Medium</th> <th>Number</th> <th>In %</th> </tr> </thead> <tbody> <tr> <td>URI</td> <td>117</td> <td>70%</td> </tr> <tr> <td>HTTP Header</td> <td>45</td> <td>27%</td> </tr> <tr> <td>URI or HTTP Header, Depending on the Type of Authentication and HTTP Method</td> <td>6</td> <td>3%</td> </tr> </tbody> </table> Table VII shows the most commonly used ways of transmitting authentication credentials. As can be seen, 70% of the Web APIs send authentication information directly in the URI, while less than one third require that the HTTP header is constructed. This means that even if Web APIs require authentication, most of them do not need a custom client but can rather be invoked directly from a Web browser. These numbers are similar for invocation in general, where about one third of the APIs require the construction of the HTTP request, while the rest can be called by using the URI. F. Completeness of the Documentation Finally, in this section we present results for API description features which are not strictly necessary for directly supporting service tasks such as discovery or invocation, but are useful when implementing and using the APIs. As Table VIII shows, more than 75% of the APIs provide example requests and responses.
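The two dominant ways of transmitting credentials from Table VII can be sketched with Python's standard library; the endpoint, key name and account details below are placeholders:

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

def key_in_uri(base, api_key, **params):
    """API-key authentication: the key travels as a plain URI parameter."""
    return f"{base}?{urlencode(dict(params, key=api_key))}"

def basic_auth_request(uri, user, password):
    """HTTP Basic: credentials go into the Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return Request(uri, headers={"Authorization": f"Basic {token}"})

print(key_in_uri("http://example.com/api/News", "SECRET", user="aUser"))
# http://example.com/api/News?user=aUser&key=SECRET
```

The first variant can be typed directly into a browser's address bar, which matches the observation above that most authenticated APIs need no custom client; the second requires constructing the HTTP request, and only Basic's base64 encoding (not encryption) protects the credentials.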
These give valuable information about the structure and form of the request, as well as of the retrieved results, and therefore ease the development work. We also found that only about half of the APIs describe the error codes used. This represents a problem, since in half of the cases developers cannot determine, and have no indication of, what went wrong and whether the error is due to an incorrect invocation, the connection, missing credentials, etc. IV. RESULTS As already pointed out, Web APIs face a number of challenges, mainly related to the fact that currently all common service tasks, such as discovery, composition and invocation, require extensive manual effort. However, before any significant improvement can be achieved and suitable approaches can be devised, we need to gain a clear picture of the development process, used technologies, available information, richness of the descriptions, etc. In order to contribute directly towards this goal, in this section we derive a number of important results and conclusions characterising the current Web API landscape. 1) Finding Web APIs on the Web requires either manual search, using general-purpose search engines like Google and Yahoo, or referring to directories like ProgrammableWeb, which are based on manual input that is sometimes inaccurate or outdated. This result points out one of the main challenges faced by current Web API repositories. Since the API descriptions are published and updated manually by users, some of the entries are not up-to-date or no longer exist. In addition, details such as the authentication method are not always accurate. Therefore, there is a need for developing solutions for a more automated way of collecting, publishing and updating API descriptions. 2) Few APIs are highly reused, whereas most APIs are used in very few or no mashups at all. In addition, there is no correlation between the level of reusability of APIs and their granularity.
Reusability, as indicated by the number of mashups per API, is a very important characteristic of the current Web API landscape. First, since we have no direct information about how many of the existing APIs are actually being used, the number of mashups is an indirect indication of this. Second, the frequent participation of APIs in mashups is reflected in the increased significance of certain service tasks, in this case composition, and of the pieces of data required for supporting these tasks. This is made even clearer by the fact that the 222 APIs analysed in our study participated in a total of 1350 mashups. Therefore, future approaches for supporting the use of APIs should especially focus on enabling the composition and creation of mashups. 3) There are three main types of Web API descriptions (RESTful, RPC-style and hybrid), but developers prefer to describe APIs in terms of operations rather than resources. This means that each type of Web API requires separate invocation support, which makes it even more challenging to provide support for the invocation of mashups. Currently, mashup development is based on individual solutions, which have a low level of reusability and do not contribute to the automation of a common API invocation process. The fact that most developers prefer to describe APIs in terms of operations, disregarding REST principles, can be explained by looking at popular ways of defining interfaces, which are commonly based on operations and methods. Therefore, developers with previous knowledge of interface description languages and a background in programming intuitively tend to formulate Web APIs in terms of operations, rather than resources manipulated through the HTTP methods. 4) API reuse is not driven by the particular type of Web API description (RESTful, RPC-style or hybrid). Therefore, the current proliferation of Web APIs cannot be attributed to the use of RESTful APIs.
We base this conclusion on the fact that the mashup percentage distribution matches the Web API description type distribution almost exactly. Our data shows no indication of RESTful APIs having a leading role in determining how APIs are described or whether they are used in mashups. 5) The description of input parameters is very flexible, allowing for the use of default values, coded values, alternative values and optional parameters. This presents a hindrance for all service tasks, especially invocation. Service tasks that predominantly rely on the input information, such as discovery, composition and invocation, gain complexity, since some parameters are optional and the input data has to be transformed into coded or alternative values. As a result, approaches that aim to support the use of Web APIs should be able to deal with the flexibility of the input parameters. This is especially true for invocation, which requires the development of an integrated view of all these diverse input forms. 6) XML and JSON are establishing themselves as the main output formats. Even though there are no guidelines for the format of the output, currently most APIs return their results either in XML or in JSON. Therefore, providing support for using and processing only these two formats would directly contribute to an overall increase of Web API usability. 7) More than 80% of the APIs require some form of authentication. Therefore, authentication is a vital part of the invocation process, and any approach for supporting the use of APIs and mashups that disregards authentication has very limited applicability. Currently, developers have to sign up with multiple providers in order to acquire the credentials necessary for the APIs participating in a mashup, or restrict their implementations to APIs that are based on shared credentials such as OAuth. 8) Most API descriptions are characterised by under-specification.
Our data shows that two thirds of the APIs do not state the data-type of the input and that 40% of the APIs do not state the HTTP method. If a standard interface description language, such as WSDL, were used to describe Web APIs, not specifying these details would be unthinkable. Since there is no common IDL, under-specification is very common and, more importantly, as our data shows, it in no way affects the level of reuse of APIs. Looking at the different results provided in this section, it becomes obvious that the current Web API landscape is very heterogeneous and that it is not possible to determine what a typical Web API description looks like. Without a doubt, all descriptions contain common pieces of information, which are required for the support of main service tasks such as discovery, composition and invocation. However, since Web API development is not guided by standards, the diversity ranges from the structure and form of the documentation to the technological principles behind the implementation. Therefore, the use of APIs currently requires extensive manual effort, and the development of automated approaches is very challenging. V. DISCUSSION In this section we reflect on a number of further trends and correlations that we discovered while conducting our Web API analysis. In particular, we describe how APIs from the same domain tend to have similar features. One interesting correlation that we detected is that APIs from the same ProgrammableWeb category tend to have the same type of description. For example, all bookmarking APIs were RPC-style, while all project management ones were RESTful. This holds for most of the categories, where we found that the majority of the APIs have the same type. This might be due to developers investigating competing providers and their services and, therefore, being influenced by the way APIs with similar functionality are structured and described.
Also, for some use cases it is more intuitive to base the description on resources, while for others using operations is more natural. For example, getting a set of news articles or information about an artist can easily be described based on resources (GET http://example.com/News and GET http://example.com/Artist?name=madonna), while determining a route between two locations or retrieving the temperature in a city can better be realised with operations. In addition to having similar types of descriptions, we discovered that APIs from the same category usually have similar authentication mechanisms. For example, most governmental and medical APIs require no authentication, while job search and general search APIs commonly use an API key. This again can be attributed to developers comparing their API with other APIs with similar functionality and, as a result, adopting similar authentication measures. However, certain domains should naturally be very accessible, while others, related to more private or confidential information, should be protected by stronger authentication measures. The survey also provided some important information about the Web API description forms. In particular, none of the analysed APIs used WSDL [6] or WADL [5], and the majority of the APIs are documented directly in HTML Web pages. A search for WADL documents in Google returns only around 160 matches, though certainly not all of these represent actual APIs (the search was done on 19.05.2010). This number should be compared to the over 2000 APIs currently registered in the ProgrammableWeb directory. In addition, some of the descriptions were in PDF, which requires downloading the documentation and makes crawling for APIs and automated processing more difficult. VI. RELATED WORK To date, the current state of Web APIs, including different description forms, input types, invocation details, etc., has remained unexplored. However, there are two similar studies devoted to investigating Web services on the Web.
The authors in [10] provide a study on Web services, focusing on deriving statistics based on operations analysis, size analysis, words distribution and function diversity analysis by using the Google API. This study is based on only a few Web service characteristics and is restricted to a single source. A broader and more complete study is given by [9]. The authors have developed a crawler for collecting metadata about service interfaces available through repositories, portals and search engines. The gathered data is used to determine statistics about object sizes, type of technology and functioning of the Web services, among others. In comparison to previous studies, this one also provides conclusions about the status of Web services and what percentage of the Web services are considered to be active and responsive. VII. CONCLUSION AND FUTURE WORK Currently, finding, interpreting and invoking Web APIs requires extensive human involvement due to the lack of machine-processable API descriptions. However, before any significant progress and improvement can be made to the existing practices and technologies for Web APIs, we need to reach a deeper understanding of how APIs are developed and exposed, what kind of descriptions are available, how they are represented and how rich these descriptions are. In this paper, we contribute directly to this goal by providing a thorough analysis of the current state of Web APIs, based on investigating six groups of characteristic features: general information, type of Web API, input parameters, output formats, invocation details and complementary documentation. By using the collected data, we can better understand what the current difficulties are, which problems need to be addressed, and how supporting mechanisms should be devised.
In this sense, we show that RESTful services are not the driving force behind the current Web API proliferation, and that Web API descriptions are characterised by under-specification, where important information such as the data-type and the HTTP method is commonly missing. Future work will involve repeating the study over the same set of Web APIs in one year. In this way we will have an updated view of the Web API landscape and will be able to make statements about changes and developments. In addition, we are planning to investigate some further correlations, such as the ones between the domain and the level of reuse, or between the granularity and the domain. REFERENCES
A framework for the rigorous design of highly adaptive timed systems. Cordy, Maxime; Legay, Axel; Schobbens, Pierre-Yves; Traonouez, Louis-Marie. DOI: 10.1109/FormaliSE.2013.6612279. Publication date: 2013. Document version: publisher's PDF, also known as version of record. Link to publication: https://doi.org/10.1109/FormaliSE.2013.6612279

A Framework for the Rigorous Design of Highly Adaptive Timed Systems Maxime Cordy∗, Axel Legay†, Pierre-Yves Schobbens∗, and Louis-Marie Traonouez† ∗Precise Research Center, University of Namur, Belgium, {mcr,pys}@info.fundp.ac.be †Inria Rennes, France, {axel.legay,louis.marie.tranonouez}@inria.fr Abstract—Adaptive systems can be regarded as a set of static programs and transitions between these programs. These transitions allow the system to adapt its behaviour in response to unexpected changes in its environment. Modelling highly dynamic systems is cumbersome, as they may go through a large number of adaptations. Moreover, they must often also satisfy real-time requirements, whereas adaptations may not complete instantaneously.
In this paper, we propose to model highly adaptive systems as dynamic real-time software product lines, where software products are able to change their features at runtime. Adaptive features allow one to design systems equipped with runtime reconfiguration capabilities and to model changes in their environment, such as failure modes. We define Featured Timed Game Automata, a formalism that combines adaptive features with discrete and real-time behaviour. We also propose a novel logic to express real-time requirements on adaptive systems, as well as algorithms to check a system against them. We implemented our method as part of PyECDAR, a model checker for timed systems. Index Terms—Software Product Lines, Features, Real-time systems, Model-checking, Timed Games I. INTRODUCTION Computers play a central role in modern life and their errors can have dramatic consequences. Proving the correctness of computer systems is therefore an extremely relevant problem, for which quality assurance techniques like model checking and testing provide efficient solutions. Testing consists in applying a finite series of test cases to the system. Although it can detect errors, it cannot guarantee their absence. Another of its limitations is that nowadays systems are embedded and highly configurable, which makes it hard to specify relevant test cases. Model checking [1] is an automated technique for verifying systems against functional requirements. The approach relies on an exhaustive verification of a behavioral model of the system against a property expressed in temporal logic. If the system fails to satisfy the property, then the model checking algorithm provides a counterexample, that is, an example of the violation. By nature, an exhaustive model check guarantees the absence of errors with respect to the verified property. Although it suffers from the so-called state-space explosion problem, model checking has been widely used and applied to both academic and industrial case studies.
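As a toy illustration of the exhaustive-exploration idea behind model checking (not the algorithm of this paper), a breadth-first search over a finite transition system either returns a counterexample path to an error state or proves that none is reachable. All names below are our own illustrative choices:

```python
from collections import deque

def find_violation(initial, successors, is_error):
    """Explore every reachable state; return a path to an error state,
    or None when the whole reachable state space is error-free."""
    parents = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_error(state):
            path = []                       # rebuild the counterexample
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:
                parents[nxt] = state
                queue.append(nxt)
    return None                             # exhaustive: absence of errors

# Toy system: a counter that may increment up to 3; "error" means reaching 4.
print(find_violation(0, lambda s: [s + 1] if s < 3 else [], lambda s: s == 4))
# None
# Raising the bound makes the error reachable and yields a counterexample.
print(find_violation(0, lambda s: [s + 1] if s < 4 else [], lambda s: s == 4))
# [0, 1, 2, 3, 4]
```

The returned path plays the role of the "example of violation" mentioned above.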
Model checking was initially intended for closed and static Boolean systems, but has been extended to target increasingly wider classes of systems, including real-time systems. The recent advances in computer science pose new challenges to model checking. One of the major difficulties is that today’s systems often run in open and potentially unsafe environments, which requires them to adapt their behavior in order to accomplish their tasks reliably. In the case of a highly evolving environment, these adaptations must be performed as quickly as possible, hence the need for self-adaptive systems. We assume the environment is known a priori, but its characteristics may evolve non-deterministically at runtime. Such systems are harder to verify than static, closed systems, whose behaviors are known a priori. Applying model checking to an adaptive system requires representing all of its classes of behaviour as well as its capability to transit between them. Moreover, adaptive systems must satisfy multiple goals which may evolve over time and according to changes in the system or its environment [2]. One way to model an adaptive system is to view it as a set of static programs and transitions between these programs [3]. When the system has to adapt its behaviour, it triggers a transition to one of its other programs. The drawback of this approach is that all these programs must be modelled and verified individually, which has huge costs and quickly becomes intractable. Another difficulty is the need to verify dynamic properties of adaptive systems. Classical logics cannot express them in a proper way. Alternatives to existing model checking techniques for adaptive systems are thus needed. The static programs composing an adaptive system likely share commonalities, while each also has parts of its own. An alternative is to organize the variability between these programs into features, a concept borrowed from software product line engineering (SPLE) [4].
In the latter discipline, a feature is an added functionality that meets a customer requirement. A product of the line is thus obtained by composing desired features together. In the context of adaptive systems, features model differences between the static programs composing the system. Modifications in its behaviour are therefore triggered by changing its features. We name this process reconfiguration. Features constitute an appropriate modelling artifact to reason on runtime variability. Moreover, transposing this concept to adaptive systems makes it possible to benefit from the formal verification techniques currently developed in SPLE. The behavior of adaptive systems often relies on real-time requirements such as meeting deadlines or reacting in real time to faults. For example, a routing protocol must ensure that a data packet reaches the recipient within a certain amount of time (see more in Section II). Unexpected changes in the environment may prevent the satisfaction of these requirements, hence the need for the system to perform adaptations. The reconfiguration process is not always instantaneous, though. The system may require time to change its features, or may have to delay the reconfiguration until it reaches a stable state. Unfortunately, most existing model checking techniques for adaptive systems are not capable of handling such constraints. **Contributions.** In this paper, we propose a formal framework to model and verify adaptive systems that must satisfy evolving real-time requirements. We introduce *Featured Timed Game Automata* (FTGA), a formalism to represent adaptive behaviour, dynamic environments, and real time. Our model results from the combination of (1) Adaptive Featured Transition Systems [4], a formalism to model dynamic reconfigurations and evolving environments, and (2) timed automata [5], an established formalism for real-time systems.
The semantics of an FTGA is defined as a timed game, where the system plays against the environment. Our formalism differs from existing game-based approaches [6] in that it concisely models reconfigurations of the system and evolutions of the environment by exploiting the featured transition approach [7]. The latter provides even more flexibility to our method, which supports not only runtime reconfiguration but also design-time variability. FTGA thus constitute a unified formalism to model the behaviour of real-time adaptive software product lines. As a second contribution, we propose a new temporal logic to express requirements on FTGA. In [4], we introduced *Adaptive Configuration Time Logic* (AdaCTL), a variant of *Computation Tree Logic* (CTL) to reason on features and reconfigurations. The main differences between AdaCTL and CTL are that (1) the existential and universal quantifiers have a game-based semantics similar to Alternating-time Temporal Logic (ATL) [8], and (2) the satisfaction relation returns a set of configurations rather than a Boolean value. In this paper, we go one step further and introduce *Timed Adaptive Configuration Time Logic* (T-AdaCTL), a real-time extension of AdaCTL, the semantics of which is inspired by Timed ATL [9]. Finally, we design efficient model-checking algorithms to verify an adaptive system modelled as an FTGA against requirements expressed in T-AdaCTL. These algorithms extend efficient timed-game algorithms [10]. As a proof of concept, we implemented our method as part of PyECDAR, a model checker for timed systems [11]. An extended version of the paper, with a more complete state of the art and detailed algorithms, is available in [12]. **Structure of the paper.** In Section II, we introduce our running example. We define FTGA in Section III, whereas we introduce T-AdaCTL and our model checking algorithms in Section IV. We discuss our implementation in Section V. ### II.
Introductory Example

We present an example inspired by the TCP routing protocol described in [3]. We consider a routing protocol that can work in two different environments: a safe environment, where all the nodes are fully trusted, and an unsafe environment, where some nodes might be corrupted. In an unsafe environment, a message must be encrypted before it is sent. Every operation (routing, sending and encryption) requires time to complete. The behavior of the protocol in the two types of environment is modelled as timed automata in Fig. 1. The protocol must satisfy safety and liveness properties. When the environment is unsafe, all the messages must be encrypted before they are sent. When the environment is safe, the messages must be sent at most 20 time units after being received. In a changing environment, the protocol must switch between the two configurations in order to adapt itself. This reconfiguration is only possible in state Received. We study two different implementations: in the first one, the reconfiguration can occur at most every 25 time units; in the second one, the reconfiguration can always be done but its application requires 5 time units. We want to determine in which implementations the system can satisfy its specifications.

### III. Featured Timed Games

This section introduces the mathematical model we propose to represent real-time adaptive systems. It includes a representation of an open environment with which the system interacts in real time. This environment evolves over time, and the system must adapt its behavior to cope with these variations. To make our models concise and facilitate reasoning, we represent both the different functional modes of the system and the state of the environment with adaptive features, i.e., features that can be enabled or disabled at runtime. In standard SPL, features usually model design-time variability and are thus not meant to be modified at runtime.
Our formalism considers these features as a particular case of adaptive features. Therefore, it is flexible enough to support product lines of real-time adaptive systems. We first introduce the syntax of the model. Then we define its semantics as a timed game. We shall see that timed games are particularly suitable to reason on the system's reconfigurations with regard to changes in the environment. Beforehand, we recall basic concepts to formally represent runtime variability and real time.

#### A. Encoding Variability and Real-Time Constraints

**Variability.** In SPL, features usually designate units of difference between software products. We extend this notion to represent the possible adaptations of the system, as well as dynamic characteristics of the environment. Therefore, we distinguish between adaptive and static features, which may or may not change at runtime, respectively. Dependencies between features can be captured in a feature model. In this paper, we define a feature model as a tuple \( d = (F_s, F_a, F_e, [d]) \) where \( F_s \) is the set of features of the system, \( F_a \subseteq F_s \) contains its adaptive features, and \( F_e \) denotes the features of the environment. We assume that \( F_s \) and \( F_e \) are disjoint and denote their union by \( F \). A configuration of \( d \) is any subset of \( F_s \cup F_e \). Therefore, it denotes a particular variant of the system equipped with specific static and adaptive features, and deployed in a certain type of environment. Finally, \([d] \subseteq \mathcal{P}(F)\), where \( \mathcal{P} \) denotes the powerset, is the set of valid configurations, i.e. those that satisfy the dependencies between the features. To express that the possible behaviors of the system and the environment may depend on their features, we extend the notion of feature expression borrowed from featured transition systems (FTS) [7].
FTS extend labelled transition systems such that a transition may only be triggered by a restricted set of configurations. Each transition is labelled with a feature expression, that is, a Boolean function \( \exp : \mathcal{P}(F) \rightarrow \{\top, \bot\} \) such that \( \exp(p) = \top \) iff configuration \( p \) can execute the transition. We denote by \([\exp] \subseteq \mathcal{P}(F)\) the set of configurations that satisfy \( \exp \), and by \( \top \) the feature expression such that \( [\top] = \mathcal{P}(F) \). Further in this section, we show how we generalize feature expressions to handle reconfiguration and how we combine them with time constraints. **Real-time.** Timed Automata are an established formalism to represent real-time behavior. They extend labelled transition systems with real-time clocks whose values evolve as time passes. The evolution of the clocks and the discrete behavior of the system are controlled by clock resets attached to transitions, and by clock constraints. These constraints are either transition guards, which specify when the system can execute a transition, or location invariants, which define when the system may remain in a given location. Examples of Timed Automata are shown in Fig. 1 to describe the models of the routing protocol. Let \( C \) be a finite set of clocks. A clock valuation over \( C \) is a function \( u : C \rightarrow \mathbb{R}_{\geq 0} \), that is, \( u \in \mathbb{R}^C_{\geq 0} \). Given a valuation \( u \) and a delay \( \tau \in \mathbb{R}_{\geq 0} \), we write \( u+\tau \) for the valuation defined by \( (u+\tau)(x) = u(x) + \tau \). For \( \lambda \in \mathcal{P}(C) \), we write \( u[\lambda] \) for the valuation agreeing with \( u \) on the clocks in \( C \setminus \lambda \), and setting to 0 the clocks in \( \lambda \). Let \( \mathcal{B}(C) \) denote all clock constraints \( \varphi \) generated by the grammar \( \varphi ::= x \prec k \mid x-y \prec k \mid \varphi \land \varphi \), where \( k \in \mathbb{Q} \), \( x, y \in C \) and \( \prec\, \in \{<, \leq\} \).
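The clock operations just defined (delay, reset, and satisfaction of a conjunction of atomic constraints) translate directly into code. The dictionary representation and function names below are our own illustrative choices, not part of the formalism:

```python
import operator

# Comparison operators allowed in atomic clock constraints.
OPS = {"<": operator.lt, "<=": operator.le}

def delay(u, t):
    """u + t: let t time units pass uniformly on all clocks."""
    return {x: v + t for x, v in u.items()}

def reset(u, lam):
    """u[lam]: set the clocks in lam to 0, keep the others."""
    return {x: (0.0 if x in lam else v) for x, v in u.items()}

def satisfies(u, phi):
    """u |= phi, where phi is a conjunction given as a list of atoms:
    (x, op, k) for bounds, (x, y, op, k) for clock differences."""
    for atom in phi:
        if len(atom) == 3:
            x, op, k = atom
            if not OPS[op](u[x], k):
                return False
        else:
            x, y, op, k = atom
            if not OPS[op](u[x] - u[y], k):
                return False
    return True

u = {"x": 0.0, "y": 0.0}
u = delay(u, 22.5)                     # both clocks advance together
u = reset(u, {"x"})                    # a transition resets x only
print(satisfies(u, [("x", "<", 20)]))  # True
print(satisfies(u, [("y", "<", 20), ("y", "x", "<=", 30)]))  # False
```

Note how, after the delay and the reset, the two clocks differ: this is exactly why difference constraints \( x - y \prec k \) are useful.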
By \( \mathcal{U}(C) \subseteq \mathcal{B}(C) \), we denote the set of constraints restricted to upper bounds and without clock differences. For \( \varphi \in \mathcal{B}(C) \) and \( u \in \mathbb{R}^C_{\geq 0} \), we write \( u \models \varphi \) iff \( u \) satisfies \( \varphi \). For \( Z \subseteq \mathbb{R}^C_{\geq 0} \), we write \( Z \models \varphi \) iff \( u \models \varphi \) for all \( u \in Z \). We write \([\varphi]\) to denote the set of valuations that satisfy \( \varphi \). Then \( Z \subseteq \mathbb{R}^C_{\geq 0} \) is a zone iff \( Z = [\varphi] \) for some \( \varphi \in \mathcal{B}(C) \). To represent the behavior of systems deployed in open environments, a model must distinguish the actions of the system from those of the environment. Timed Game Automata [6] are Timed Automata where actions are either controllable (actions of the system) or uncontrollable (actions of the environment). In this formalism, the satisfaction of properties is determined by solving a two-player timed game. **B. Featured Timed Game Automata** We are now ready to introduce Featured Timed Game Automata (FTGA) as a formalism to model product lines of real-time adaptive systems. FTGA result from the combination of the encodings presented above. They provide the following modelling facilities: 1. **Open environment.** An FTGA distinguishes between controllable and uncontrollable transitions. 2. **Real-time.** Clock constraints in invariants and transition guards model real-time constraints on the system and its environment. 3. **Variability.** Each transition is constrained by a feature expression that defines in which configurations the system or its environment can execute it. This allows one to differentiate the capabilities of each configuration. 4. **Adaptations.** The transition relation also encodes which reconfigurations are possible upon the execution of an action by the system or its environment. Formally, FTGA are defined as follows.
**Definition 1** An FTGA is a tuple \( \mathcal{G} = (\text{Loc}, l_0, C, \text{Act}, \text{Inv}, \text{Trans}, d, \gamma, \text{AP}, \mathcal{L}) \) where \( \text{Loc} \) is a finite set of locations, \( l_0 \in \text{Loc} \) is the initial location, \( C \) is a finite set of clocks, \( \text{Act} = \text{Act}_c \cup \text{Act}_e \) is a finite set of actions partitioned into controllable actions in \( \text{Act}_c \) and uncontrollable actions in \( \text{Act}_e \), \( \text{Inv} : \text{Loc} \rightarrow \mathcal{U}(C) \) associates an invariant with each location, \( \text{Trans} \subseteq \text{Loc} \times \text{Act} \times \mathcal{B}(C) \times \mathcal{P}(C) \times \text{Loc} \) is a set of transitions, \( d = (F_s, F_a, F_e, [d]) \) is a feature model, \( \gamma : \text{Trans} \rightarrow (\mathcal{P}(F) \times \mathcal{P}(F) \rightarrow \{\top, \bot\}) \) specifies for each transition which configurations can execute it, and how the configurations of the system and the environment can evolve, \( \text{AP} \) is a finite set of atomic propositions, and \( \mathcal{L} : \text{Loc} \rightarrow 2^{\text{AP}} \) labels each location with the atomic propositions it satisfies. The adaptation process is encoded as part of the function \( \gamma \). This function is defined such that adaptive features may only be changed by controllable transitions, and environment features may only be changed by uncontrollable transitions. Formally, let \( \alpha = (l, a, \varphi, \lambda, l') \in \text{Trans} \). For any configurations \( c, c', e, e' \): if \( a \in \text{Act}_c \), then \( \gamma(\alpha)(c \cup e, c' \cup e') = \top \Rightarrow c' \subseteq F_s \land e' = e \); and if \( a \in \text{Act}_e \), then \( \gamma(\alpha)(c \cup e, c' \cup e') = \top \Rightarrow c' = c \). Moreover, any reconfiguration of the system or the environment must ensure that the new configuration is valid, that is, \( \gamma(\alpha)(c \cup e, c' \cup e') = \top \Rightarrow c' \cup e' \in [d] \).
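The constraints that Definition 1 places on \( \gamma \) amount to a simple admissibility test over configurations. A minimal sketch, using our own encoding (configurations as frozensets) and additionally enforcing that a system move only flips adaptive features, as stated in the prose above:

```python
def reconfiguration_ok(controllable, c, e, c2, e2, F_a, valid):
    """Admissibility of a move from configuration c | e to c2 | e2."""
    if (c2 | e2) not in valid:              # new configuration must lie in [d]
        return False
    if controllable:                        # system move: environment frozen,
        return e2 == e and (c ^ c2) <= F_a  # only adaptive features may flip
    return c2 == c                          # environment move: system frozen

# Toy feature model echoing the running example (illustrative values).
F_a = frozenset({"encrypt"})
valid = {frozenset(v) for v in [("p-reconf",), ("p-reconf", "encrypt"),
                                ("p-reconf", "safe"),
                                ("p-reconf", "encrypt", "safe")]}
c, e = frozenset({"p-reconf"}), frozenset({"safe"})

# Controllable move enabling encrypt: allowed.
print(reconfiguration_ok(True, c, e, c | frozenset({"encrypt"}), e, F_a, valid))  # True
# Controllable move trying to change the environment: rejected.
print(reconfiguration_ok(True, c, e, c, frozenset(), F_a, valid))                  # False
```

An uncontrollable move may change only the environment part, e.g. disabling `safe` while leaving `c` untouched.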
The function \( \gamma \) provides a flexible encoding to restrict the reconfiguration process. In particular, it is able to specify the minimum and maximum amount of time needed to transit from a given configuration to another one. To that aim, one may define a self-loop transition constrained by a given clock, and annotated with an action that represents the reconfiguration process. **C. Game Semantics** An FTGA specifies the behavior of a set of systems, that is, one per valid configuration. The initial configuration of the system will determine how its behavior may evolve over time. Indeed, static features cannot be changed at runtime and thus fix parts of the system's capabilities. Similarly, reconfiguration is not always doable; the initial value of the adaptive features may thus prevent the system from performing actions early in the execution, which may lead to unavoidable errors. Accordingly, we define the semantics of an FTGA as a function \( \tau : \mathcal{P}(F_s) \to \mathcal{P}((\text{Loc} \times \mathbb{R}^C_{\geq 0} \times \mathcal{P}(F_s) \times \mathcal{P}(F_e))^\omega) \) that associates an initial system configuration with its set of infinite executions. A state of the execution is a tuple \( s = (l, u, c, e) \), where \( l \in \text{Loc} \) is a location, \( u \in \mathbb{R}^C_{\geq 0} \) is a clock valuation, \( c \in \mathcal{P}(F_s) \) is a system configuration and \( e \in \mathcal{P}(F_e) \) is an environment configuration such that \( c \cup e \in [d] \). An initial state is \( (l_0, \mathbf{0}, c_0, e_0) \), where \( \mathbf{0} \) is the valuation that initializes all clocks to zero, and \( c_0, e_0 \) are the initial configurations of the system and the environment, respectively. Whereas the configuration of the system is an input of the semantics function, the initial configuration of the environment is uncontrollable and is thus chosen non-deterministically. Since we consider timed systems, an execution includes two types of transitions: - delay transitions: \( (l, u, c, e) \xrightarrow{\tau} (l, u + \tau, c, e) \) if \( \tau \in \mathbb{R}_{\geq 0} \) and \( u + \tau \models \text{Inv}(l) \).
- discrete transitions: \( (l, u, c, e) \xrightarrow{a} (l', u', c', e') \) if \( a \in \text{Act} \) and there exists \( \alpha = (l, a, \varphi, \lambda, l') \in \text{Trans} \) such that \( u \models \varphi \), \( u' = u[\lambda] \), and \( \gamma(\alpha)(c \cup e, c' \cup e') = \top \). Finally, a run (or execution) in an FTGA is a sequence of states starting from an initial state and alternating delay and discrete transitions: $$\rho = s_0 \xrightarrow{\tau_0} s_0' \xrightarrow{a_1} s_1 \xrightarrow{\tau_1} s_1' \xrightarrow{a_2} s_2 \ldots s_n \xrightarrow{\tau_n} s_n' \xrightarrow{a_{n+1}} s_{n+1} \ldots$$ Given that an FTGA considers continuous time, it specifies an infinite number of runs. Among the transitions executed during a run, some are controlled by the system and others are uncontrollable, i.e. executed by the environment. Also, the system controls how it reconfigures itself, but has no control over the configuration of the environment. The achievement of goals can thus be considered as a two-player game where the system plays against the environment. The strategy of one player prescribes a set of moves to perform according to the states previously visited. Each move consists of either delaying or executing an available action. A player can reconfigure itself only after executing an action. Formally, a strategy for the system is a function \( \text{Str}_C : (\text{Loc} \times \mathbb{R}^C_{\geq 0} \times \mathcal{P}(F_s) \times \mathcal{P}(F_e))^k \to (\text{Act}_c \times \mathcal{P}(F_s)) \cup \{\tau\} \) with \( k \geq 0 \), where \( \tau \) denotes the delay move. A strategy for the environment is defined symmetrically, except that the environment also selects its initial configuration. A strategy is valid iff (1) it complies with the transition relation and the function \( \gamma \), and (2) it does not lead to time-convergent or Zeno runs [13]. From now on we consider valid strategies only. The game proceeds as a concurrent game. In a given state, if one player chooses to delay while the other chooses an action, then this action is performed and the corresponding transition is triggered.
If both players select an action, then the transition to execute is chosen non-deterministically. Given a system strategy \( \text{Str}_C \) and an environment strategy \( \text{Str}_E \), the possible outcomes of the game, noted \( \text{Outcome}(\text{Str}_C, \text{Str}_E) \), are the set of infinite runs \( \rho = s_0 \xrightarrow{\tau_0} s_0' \xrightarrow{a_1} s_1 \xrightarrow{\tau_1} s_1' \xrightarrow{a_2} s_2 \ldots s_n \xrightarrow{\tau_n} s_n' \xrightarrow{a_{n+1}} s_{n+1} \ldots \) such that: - if \( a_i \in \text{Act}_c \) then \( \text{Str}_C(s_0, \ldots, s_i) = (a_i, c_{i+1}) \); - if \( a_i \in \text{Act}_e \) then \( \text{Str}_E(s_0, \ldots, s_i) = (a_i, e_{i+1}) \); - if \( \tau_i \in \mathbb{R}_{\geq 0} \) then \( \forall \tau_i' \in [0, \tau_i[ \,\cdot\, s_i \xrightarrow{\tau_i'} (l_i, u_i + \tau_i', c_i, e_i) \) and \( \text{Str}_C(s_0, \ldots, s_i, (l_i, u_i + \tau_i', c_i, e_i)) = \text{Str}_E(s_0, \ldots, s_i, (l_i, u_i + \tau_i', c_i, e_i)) = \tau \); where \( s_k = (l_k, u_k, c_k, e_k) \) for any \( k \in \mathbb{N} \). Example. Fig. 2 presents an FTGA modelling the routing protocol. The system actions are the plain transitions: route, reconfig, t-reconfig. The environment actions are the dashed transitions: init, receive, encryption, sent. The adaptive feature encrypt determines the system's current operation mode. Two static features p-reconf and t-reconf specify which of the two reconfiguration methods the system can use (see Section II). Finally, the environment is described with a feature safe that specifies whether the current node in the network can be trusted or not. The function \( \gamma \) is defined in two steps. First, feature expressions are added to the transition guards in the graph, in order to specify which sets of features enable each transition. Second, we specify the possible reconfigurations: - The system may only reconfigure the feature encrypt during the transitions labelled "reconfig". - The environment may only reconfigure the feature safe during the transitions labelled "sent" and "receive".
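The concurrent-move rules described earlier in this section (a proposed action preempts a delay; two simultaneous actions are resolved non-deterministically) amount to a tiny arbitration step. The move encoding below is our own illustrative choice:

```python
import random

DELAY = "delay"   # marker for the "let time pass" move

def resolve(system_move, env_move, rng=random):
    """Resolve one round of the concurrent game: any proposed action is
    performed; two proposed actions are picked non-deterministically."""
    actions = [m for m in (system_move, env_move) if m != DELAY]
    if not actions:
        return DELAY            # both players delay: time passes
    return rng.choice(actions)  # an action preempts a delay

print(resolve("route", DELAY))    # route
print(resolve(DELAY, DELAY))      # delay
print(resolve("route", "receive") in {"route", "receive"})  # True
```

In the full semantics each resolved action additionally carries the player's chosen reconfiguration, as prescribed by the strategy signatures above.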
Given these reconfiguration rules, a possible strategy for the environment is to start in a safe configuration, do the init action at \( y = 25 \), then the receive action at \( y = 30 \), and disable the feature safe during this transition. In reaction, the system strategy can be to start with the static feature p-reconf while the adaptive feature encrypt is disabled, then wait until the environment reaches the location Received. At this point it can do a reconfig action immediately, and enable the feature encrypt during the transition. Finally, at \( x = 10 \) it performs the route action to reach the location RoutedUnsafe. The outcome produced by these two strategies is the run that starts in location Init with the static feature p-reconf in a safe environment, reaches Received when the environment disables safe, enables encrypt through the reconfig action, and ends in RoutedUnsafe after the route action.

### IV. Timed AdaCTL Model-Checking

To express requirements on real-time adaptive systems, we propose T-AdaCTL, a timed extension of the Adaptive Configuration Time Logic (AdaCTL), a logic we recently introduced to reason on reconfigurable systems. We first present its syntax and semantics, and then provide algorithms to check an FTGA against a T-AdaCTL formula. A. Timed AdaCTL The formulae of T-AdaCTL are organized into three levels. The first level is the feature formula, which has the form \( \Psi ::= [\chi] \Phi \) where \( \chi \) is a feature expression and \( \Phi \) is a state formula. Intuitively, \( [\chi] \Phi \) states that if the current configuration of the system and the environment satisfies \( \chi \), then the current state must satisfy \( \Phi \). Feature formulae can thus define requirements on specific configurations, or even forbid some others.
A state formula has the form \( \Phi ::= \top \mid a \mid \Psi_1 \land \Psi_2 \mid \neg \Psi \mid A \varphi \mid E \varphi \) where \( a \in \text{AP} \), \( \Psi \), \( \Psi_1 \) and \( \Psi_2 \) are feature formulae, and \( \varphi \) is a path formula. Intuitively, a state satisfies \( A \varphi \) (resp. \( E \varphi \)) if from this state, the system can come up with a strategy of which the outcome will (resp. may) satisfy \( \varphi \). The path formulae have the form \( \varphi ::= \Psi_1\, U_I\, \Psi_2 \mid \Psi_1\, W_I\, \Psi_2 \) where \( \Psi_1 \) and \( \Psi_2 \) are feature formulae, \( I \) is an interval of \( \mathbb{R}_{\geq 0} \) with integral bounds, \( U_I \) is called the until operator and \( W_I \) is called the weak until operator. T-AdaCTL extends AdaCTL with a time constraint attached to the until operators, in the same manner as TCTL [5] extends CTL. We omit the next operator of AdaCTL, as there is no notion of direct successor in timed systems. Two path operators can be derived from \( U \) and \( W \): eventually (\( \Diamond \)), such that \( \Diamond_I \Psi = \top\, U_I\, \Psi \), and forever (\( \Box \)), such that \( \Box \Psi = \Psi\, W\, \bot \). When it comes to state and path formulae, TATL [9] is a generalisation of T-AdaCTL, as it can express more general time constraints and requirements on the environment too. However, it does not include any notion of features, which makes it inappropriate for expressing properties on our feature-based formalism. Example. Let us express the properties that the routing protocol must satisfy in T-AdaCTL. The property “If the environment is unsafe, all the messages must be encrypted before they are sent.” can be expressed by the formula \( A \Box\, [\neg \text{safe}]\, \neg\text{RoutedSafe} \). This formula specifies that the system can never reach the location RoutedSafe if the environment is not safe.
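The three-level syntax can be mirrored in a small abstract syntax tree. The class names, the string encoding of feature expressions and atoms, and the derived-operator helpers below are all illustrative, not prescribed by the paper:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Feat:            # feature formula  [chi] Phi
    chi: str
    phi: object

@dataclass(frozen=True)
class Until:           # path formula  Psi1 U_I Psi2; weak=True encodes W_I
    left: object
    right: object
    interval: Tuple[float, float]
    weak: bool = False

@dataclass(frozen=True)
class Quant:           # state formula  A phi  or  E phi
    q: str             # "A" or "E"
    path: Until

def eventually(psi, interval):
    """Derived operator: <>_I Psi == true U_I Psi."""
    return Until("true", psi, interval)

def forever(psi):
    """Derived operator: [] Psi == Psi W false (unbounded)."""
    return Until(psi, "false", (0.0, float("inf")), weak=True)

# The first example property: the system can never reach RoutedSafe
# while the environment feature safe is disabled.
never_routed_safe = Quant("A", forever(Feat("not safe", ("not", "RoutedSafe"))))
print(never_routed_safe.q, never_routed_safe.path.weak)  # A True
```

Frozen dataclasses make formulae hashable, which is convenient when caching `Sat` sets per subformula during model checking.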
The property “If the environment is safe, the messages must be sent at most 20 time units after being received.” can be expressed by the formula \( A \Box\, [\text{safe}](\text{Received} \Rightarrow A \Diamond_{[0,20]}\, \text{Ready}) \). It specifies that whenever location Received is reached in a safe environment, the location Ready must be reached within 20 time units. **Definition 2** Let \( \mathcal{G} \) be an FTGA and \( s = (l, u, c, e) \) one of its states. The satisfaction of a T-AdaCTL feature or state formula by \( \mathcal{G} \) in state \( s \) is determined as follows (Boolean connectives are interpreted as usual): \( \mathcal{G}, s \models [\chi]\Phi \) iff \( c \cup e \in [\chi] \) implies \( \mathcal{G}, s \models \Phi \); \( \mathcal{G}, s \models a \) iff \( a \in \mathcal{L}(l) \); \( \mathcal{G}, s \models A\varphi \) iff \( \exists \text{Str}_C \cdot \forall \text{Str}_E \cdot \forall \rho \in \text{Outcome}(\text{Str}_C, \text{Str}_E) \) starting in \( s \), \( \mathcal{G}, \rho \models \varphi \); \( \mathcal{G}, s \models E\varphi \) iff \( \exists \text{Str}_C \cdot \exists \text{Str}_E \cdot \exists \rho \in \text{Outcome}(\text{Str}_C, \text{Str}_E) \) starting in \( s \) such that \( \mathcal{G}, \rho \models \varphi \). The semantics of path formulae is similar to that of TCTL path formulae: \( \mathcal{G}, \rho \models \Psi_1\, U_I\, \Psi_2 \) iff \( \exists r \in I \cdot \mathcal{G}, \rho[r] \models \Psi_2 \land \forall 0 \leq r' < r \cdot \mathcal{G}, \rho[r'] \models \Psi_1 \); \( \mathcal{G}, \rho \models \Psi_1\, W_I\, \Psi_2 \) iff \( \mathcal{G}, \rho \models \Psi_1\, U_I\, \Psi_2 \) or \( \forall r \geq 0 \cdot \mathcal{G}, \rho[r] \models \Psi_1 \); where \( \rho[r] \) is the state reached in \( \rho \) at time \( r \). Note that we assume a continuous-time semantics for timed path operators [14]. We now define the satisfaction of a T-AdaCTL formula by an FTGA. Contrary to classical temporal logics, this relation, noted \( \models_F \), is not Boolean: it is defined as the set of initial system configurations such that the FTGA satisfies the formula from its initial state. **Definition 3** Let \( \mathcal{G} \) be an FTGA and \( \Psi \) a T-AdaCTL formula. Then $$\mathcal{G} \models_F \Psi = \{c_0 \in \mathcal{P}(F_s) \mid \exists e_0 \in \mathcal{P}(F_e) \cdot c_0 \cup e_0 \in [d] \;\land\; \forall e_0 \in \mathcal{P}(F_e) \cdot c_0 \cup e_0 \in [d] \Rightarrow \mathcal{G}, (l_0, \mathbf{0}, c_0, e_0) \models \Psi\}$$ B. Model-Checking Algorithms The semantics of T-AdaCTL is defined over execution paths, of which FTGA contain an infinite number.
This means that a model checking procedure for T-AdaCTL must use a symbolic representation to capture this infinite number of runs in a finite data structure. To represent the time domain of symbolic states, we extend the grammar of clock constraints with negation.

Figure 2: FTGA of the routing protocol.

The model checking of a formula \( \Psi \) proceeds over its parse tree: the root is \( \Psi \) itself, whereas the leaves are atomic formulae. Then, starting from the leaves, we associate each subformula with the set of symbolic states that satisfy it. This method is similar to the one used to check CTL formulae [15]. We present how to compute the set of symbolic states that satisfy each form of T-AdaCTL formula. For feature and state formulae, the satisfaction rules are the following: $$\text{Sat}([\chi]\Phi) = \{(l, \neg\chi, \top) \mid l \in \text{Loc}\} \cup \text{Sat}(\Phi)$$ $$\text{Sat}(\top) = \{(l, \top, \top) \mid l \in \text{Loc}\}$$ $$\text{Sat}(a) = \{(l, \top, \top) \mid a \in \mathcal{L}(l)\}$$ $$\text{Sat}(\Psi_1 \land \Psi_2) = \{(l, b_1 \land b_2, \varphi_1 \land \varphi_2) \mid (l, b_1, \varphi_1) \in \text{Sat}(\Psi_1) \land (l, b_2, \varphi_2) \in \text{Sat}(\Psi_2)\}$$ $$\text{Sat}(\neg\Psi) = \overline{\text{Sat}(\Psi)}$$ where, for any set \( S \) of symbolic states \( (l, b, \varphi) \), with \( l \in \text{Loc} \), \( b \) a feature expression and \( \varphi \) a clock constraint, the complement \( \overline{S} \) contains the symbolic states that share no concrete state with any element of \( S \). Computing \( \text{Sat}(A\varphi) \) and \( \text{Sat}(E\varphi) \) comes down to solving a two-player game where the system is the verifier and the environment is the spoiler. To that aim, we perform a backward fixed-point computation as it is performed for solving timed games in [6]. The algorithms are based on the discrete predecessors and safe timed predecessors operators. The definition of these operators in FTGA takes into account both variability and real time, which makes it different from other game-based formalisms.
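Abstracting the zone component away, the propositional Sat rules above can be mimicked over (location, configuration-set) pairs, with feature expressions represented extensionally as sets of configurations. This is a finite-state illustration only, with all names ours:

```python
def sat_atom(a, locations, labels, configs):
    """Sat(a): every configuration, in each location labelled with a."""
    return {(l, frozenset(configs)) for l in locations if a in labels[l]}

def sat_and(S1, S2):
    """Sat(Psi1 and Psi2): intersect configuration sets location-wise."""
    return {(l1, b1 & b2)
            for (l1, b1) in S1 for (l2, b2) in S2
            if l1 == l2 and b1 & b2}

def sat_not(S, locations, configs):
    """Sat(not Psi): per location, configurations covered by no member of S."""
    out = set()
    for l in locations:
        covered = set()
        for (l2, b) in S:
            if l2 == l:
                covered |= b
        rest = frozenset(configs) - covered
        if rest:
            out.add((l, rest))
    return out

locs = ["Init", "Received"]
labels = {"Init": set(), "Received": {"got"}}
confs = {frozenset(), frozenset({"encrypt"})}

S = sat_atom("got", locs, labels, confs)
print(S == {("Received", frozenset(confs))})                    # True
print(sat_not(S, locs, confs) == {("Init", frozenset(confs))})  # True
```

The real algorithm additionally carries a zone per symbolic state, which makes complementation considerably more involved than this set difference.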
It constitutes the cornerstone and the real novelty of our verification algorithms.

Formally, let $\alpha = (l, g, a, \lambda, l') \in \text{Trans}$ and $(l', b', \varphi')$ be a symbolic state. We define the discrete predecessors $\text{Pred}_d^{\alpha}(l', b', \varphi') = (l, b, \varphi)$ such that:

- $b = \{c \cup e \mid \exists (c' \cup e') \in b' \cdot \gamma(\alpha)(c \cup e,\, c' \cup e') = \top\}$
- $\varphi = \text{free}(\varphi' \land \bigwedge_{x \in \lambda} x = 0,\, \lambda) \land g \land \text{Inv}(l)$, where $\text{free}(\varphi, \lambda)$ removes from $\varphi$ all constraints over the clocks in $\lambda$.

Observe that the distributivity law applies to this operator: $\text{Pred}_d(\bigcup_i s_i) = \bigcup_i \text{Pred}_d(s_i)$.

The discrete predecessors operator can be used to compute the controllable (resp. uncontrollable) moves that allow the system (resp. the environment) to reach (resp. avoid) a winning state. However, these moves may not be safe, as the other player may perform concurrent moves. Formally, given a location $l$ and the sets of winning states $\text{Win}(l')$ for each location $l'$, these controllable moves are:

$$\text{Next}_c(l, \text{Win}) = \bigcup_{\alpha = (l, g, a, \lambda, l'),\ a\ \text{controllable}} \text{Pred}_d^{\alpha}(\text{Win}(l'))$$

The uncontrollable moves $\text{Next}_e(l, \text{Win})$ of the environment are defined symmetrically.

The winning moves are obtained through the safe timed predecessors operator. Let $s_1 = (l, b_1, \varphi_1)$ and $s_2 = (l, b_2, \varphi_2)$ be two symbolic states; the safe timed predecessors of $s_1$ w.r.t. $s_2$ are the states that can reach $s_1$ while avoiding any state from $s_2$. They are given by

$$\text{Pred}_s(s_1, s_2) = \{(l,\, b_1 \land \neg b_2,\, \text{Pred}_s(\varphi_1, \bot)),\ (l,\, b_1 \land b_2,\, \text{Pred}_s(\varphi_1, \varphi_2))\}$$

where $\text{Pred}_s(\varphi_1, \varphi_2)$ is the safe timed predecessors operator for zones, as defined in [6]. This operator computes step by step the strategy of one player to reach a winning state, whatever strategy is played by the other player.
It has the following property: $\text{Pred}_s(\bigcup_i g_i, \bigcup_j b_j) = \bigcup_i \bigcap_j \text{Pred}_s(g_i, b_j)$. In what follows, we also denote by $\text{Pred}(l)$ the set of locations from which there is a transition to $l$.

To ensure that the players' strategies are valid, we compute the deadlock states: the states at the bound of the location invariant that must not be reached while one player still has an urgent action to perform. We denote by $DL_2(l)$ (resp. $DL_1(l)$) the deadlock states in location $l$ for which the system (resp. environment) is responsible.

Algorithm 1 computes $\text{Sat}(A\Psi_1\, U_I\, \Psi_2)$. The algorithm begins with the winning symbolic states that satisfy $\Psi_2$ (Lines 3–7), and next performs a backward exploration (Lines 8–19) to discover predecessors of winning states that satisfy $\Psi_1$ and that the environment cannot prevent the system from reaching. To check that the time spent to reach the goal in $\Psi_2$ satisfies the interval constraint $I$, an additional clock, named clock, is added to the model; this is a standard way to handle timing constraints of logical formulae. This extra clock is initialized in $I$ (Line 4) and then decreases during the backward exploration. $\text{Sat}(A\Psi_1\, U_I\, \Psi_2)$ is the set of winning states for which the value of the extra clock is zero (Lines 20–25).

We use a similar procedure to compute $\text{Sat}(A\Psi_1\, W\, \Psi_2)$, which is presented in the extended version [12]. In this case, it starts from the states that violate the formula, and performs a backward exploration to compute the states from which the system cannot guarantee to avoid losing states. Then, the set of winning states is the complement of those states.

The algorithms used to compute $\text{Sat}(E\Psi_1\, U_I\, \Psi_2)$ and $\text{Sat}(E\Psi_1\, W\, \Psi_2)$ also use similar procedures. The main differences are that the two players now cooperate in order to reach the goal expressed by the path formula.
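The backward fixed-point computation at the heart of these algorithms can be illustrated in a drastically simplified setting. The sketch below (all names are ours and hypothetical; no clocks, features, or zones, just a finite two-player game) computes the classical attractor: the states from which the system can force reaching a goal set. This is the untimed, feature-free skeleton of the winning-state computation described above.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// A finite two-player game: each edge is either controllable (system move)
// or uncontrollable (environment move). Toy stand-in for the symbolic
// predecessor operators of the paper.
struct Edge { std::string src, dst; bool controllable; };

// Attractor(goal): states from which the system can force reaching `win`.
// Backward fixed point: a state is added when some controllable edge leads
// into the current winning set and every uncontrollable edge stays inside it
// (vacuously true when the environment has no move from that state).
std::set<std::string> attractor(const std::vector<Edge>& edges,
                                std::set<std::string> win,
                                const std::set<std::string>& states) {
    bool changed = true;
    while (changed) {                       // iterate until fixed point
        changed = false;
        for (const auto& s : states) {
            if (win.count(s)) continue;
            bool someCtrlWins = false, allUnctrlSafe = true;
            for (const auto& e : edges) {
                if (e.src != s) continue;
                if (e.controllable) someCtrlWins |= win.count(e.dst) > 0;
                else                allUnctrlSafe &= win.count(e.dst) > 0;
            }
            if (someCtrlWins && allUnctrlSafe) { win.insert(s); changed = true; }
        }
    }
    return win;
}
```

In the actual algorithms, the discrete step corresponds to the Pred operators over symbolic states, and the concurrent-move check is handled by the safe timed predecessors operator rather than this simple edge scan.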
As an example, an algorithm for $\text{Sat}(E\Psi_1\, U_I\, \Psi_2)$ can be obtained by adding $\text{Next}_e(l, \text{Win})$ to the set of states Good in Line 11 of Algorithm 1, and by removing Lines 12–13 (hence setting the set of bad states to the empty set in the timed predecessors). Deadlock states must still be avoided: since the two players cooperate, a deadlock makes them both lose.

V. IMPLEMENTATION

Our modelling formalism and the associated algorithms have been implemented on top of PyECDAR [11], a tool for the analysis of timed systems. In PyECDAR, a model is written in an XML file that follows the format of the UPPAAL tool set [16]. This allows us to reuse the intuitive user interface provided by UPPAAL. In our extension, we use two variables for each adaptive feature that define its value before and after the reconfiguration following a transition. This encoding is sufficient to entirely represent the function $\gamma$. Additional patterns can be used to facilitate the design of the system. For example, feature expressions can be used in the invariant of a location, which offers another way to specify possible reconfigurations. Similarly, assignments can be used to forbid the reconfiguration of an adaptive feature during a transition.

The original game algorithms of PyECDAR were limited to the safety and reachability objectives specific to timed specifications. We have implemented the new game algorithms presented in this paper and the recursive procedure that checks a T-AdaCTL formula. To encode continuous time, we use federations, which are finite unions of DBMs, implemented in the UPPAAL DBM library. We encode feature expressions with BDDs, implemented using the PyCUDD library (Python bindings for the CUDD library [17]). Using these two libraries, we have implemented a new encoding for symbolic states with both feature and time domains, as well as an encoding for finite unions of symbolic states.
All the required operators (union, intersection, negation, discrete predecessors, timed predecessors) are thus implemented in PyECDAR for unions of symbolic states.

Example. We consider again the routing protocol modelled in Fig. 2. We first use PyECDAR to check it against the T-AdaCTL formula Ψ₁ = A□([¬safe]¬RoutedSafe). PyECDAR computes the satisfaction relation for the formula, which is given by p-reconf ∨ t-reconf ∨ encrypt. It means that the formula is satisfied iff either of the two reconfiguration features is enabled, or feature encrypt is initially enabled. Then, we verify the formula Ψ₂ = A□([safe]Received ⇒ A◊[0,20]Ready), and we obtain the following result: p-reconf ∨ encrypt. Finally, we consider the formula Ψ₃ = A□(([¬safe]¬RoutedSafe) ∧ ([safe]Received ⇒ A◊[0,20]Ready)) that combines the previous ones. The satisfaction relation is now restricted to p-reconf, which proves that in order to satisfy both properties at the same time the system requires the p-reconf feature. Note that this result is not the same as the conjunction of the two previous results. Indeed, solving the formula Ψ₁ ∧ Ψ₂ comes down to finding configurations in which the system has a strategy to satisfy Ψ₁ and a strategy to satisfy Ψ₂, but there may exist no single strategy that satisfies both goals simultaneously.

VI. CONCLUSION

This paper presents a new formal model for highly adaptive real-time systems. Our approach relies on a combination of adaptive feature transition systems with timed automata. The semantics of our model is given as a timed game, which views the system and the environment as concurrent entities. In our setting, requirements are expressed in a new logic called T-AdaCTL, for which we provide a model-checking procedure. We have implemented our approach as an extension of the PyECDAR tool set. The new tool has been applied to academic case studies. Our next objective is to evaluate our approach on real-life case studies.

REFERENCES
Lecture 3: Parallel Programming Abstractions (and their corresponding HW/SW implementations)

Parallel Computing Stanford CS149, Fall 2020

Today's theme is a critical idea in this course. And today's theme is: Abstraction vs. implementation. Conflating abstraction with implementation is a common cause for confusion in this course.

An example: Programming with ISPC

ISPC
- Intel SPMD Program Compiler (ISPC)
- SPMD: single program multiple data
- http://ispc.github.com/
- A great read: "The Story of ISPC" (by Matt Pharr)
  - https://pharr.org/matt/blog/2018/04/30/ispc-all.html

Recall: example program from last class

Compute $\sin(x)$ using Taylor expansion: $\sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...$ for each element of an array of N floating-point numbers

```c
void sinx(int N, int terms, float* x, float* result) {
  for (int i=0; i<N; i++) {
    float value = x[i];
    float numer = x[i] * x[i] * x[i];
    int denom = 6; // 3!
    int sign = -1;
    for (int j=1; j<=terms; j++) {
      value += sign * numer / denom;
      numer *= x[i] * x[i];
      denom *= (2*j+2) * (2*j+3);
      sign *= -1;
    }
    result[i] = value;
  }
}
```

Invoking `sinx()`

**C++ code: main.cpp**

```cpp
#include "sinx.h"

int main(int argc, void** argv) {
  int N = 1024;
  int terms = 5;
  float* x = new float[N];
  float* result = new float[N];

  // initialize x here

  sinx(N, terms, x, result);
  return 0;
}
```

**C++ code: sinx.cpp**

```cpp
void sinx(int N, int terms, float* x, float* result) {
  for (int i=0; i<N; i++) {
    float value = x[i];
    float numer = x[i] * x[i] * x[i];
    int denom = 6; // 3!
    int sign = -1;
    for (int j=1; j<=terms; j++) {
      value += sign * numer / denom;
      numer *= x[i] * x[i];
      denom *= (2*j+2) * (2*j+3);
      sign *= -1;
    }
    result[i] = value;
  }
}
```

**sinx() in ISPC**

**C++ code: main.cpp**

```cpp
#include "sinx_ispc.h"

int main(int argc, void** argv) {
  int N = 1024;
  int terms = 5;
  float* x = new float[N];
  float* result = new float[N];

  // initialize x here

  // execute ISPC code
  ispc_sinx(N, terms, x, result);
  return 0;
}
```

**ISPC code: sinx.ispc**

```cpp
export void ispc_sinx(
  uniform int N,
  uniform int terms,
  uniform float* x,
  uniform float* result)
{
  // assume N % programCount = 0
  for (uniform int i=0; i<N; i+=programCount) {
    int idx = i + programIndex;
    float value = x[idx];
    float numer = x[idx] * x[idx] * x[idx];
    uniform int denom = 6; // 3!
    uniform int sign = -1;
    for (uniform int j=1; j<=terms; j++) {
      value += sign * numer / denom;
      numer *= x[idx] * x[idx];
      denom *= (2*j+2) * (2*j+3);
      sign *= -1;
    }
    result[idx] = value;
  }
}
```

**SPMD programming abstraction:** Call to ISPC function spawns "gang" of ISPC "program instances". All instances run ISPC code concurrently. Each instance has its own copy of local variables (blue variables in code, we'll talk about "uniform" later). Upon return, all instances have completed.

Invoking sinx() in ISPC

C++ code: main.cpp

```c++
#include "sinx_ispc.h"

int main(int argc, void** argv) {
  int N = 1024;
  int terms = 5;
  float* x = new float[N];
  float* result = new float[N];

  // initialize x here

  // execute ISPC code
  ispc_sinx(N, terms, x, result);
  return 0;
}
```

SPMD programming abstraction: Call to ISPC function spawns "gang" of ISPC "program instances". All instances run ISPC code concurrently. Each instance has its own copy of local variables. Upon return, all instances have completed. In this illustration programCount = 8

**sinx() in ISPC**

"Interleaved" assignment of array elements to program instances

**C++ code: main.cpp**

```
#include "sinx_ispc.h"

int main(int argc, void** argv) {
  int N = 1024;
  int terms = 5;
  float* x = new float[N];
  float* result = new float[N];

  // initialize x here

  // execute ISPC code
  ispc_sinx(N, terms, x, result);
  return 0;
}
```

**ISPC code: sinx.ispc**

```
export void ispc_sinx(
  uniform int N,
  uniform int terms,
  uniform float* x,
  uniform float* result)
{
  // assumes N % programCount = 0
  for (uniform int i=0; i<N; i+=programCount) {
    int idx = i + programIndex;
    float value = x[idx];
    float numer = x[idx] * x[idx] * x[idx];
    uniform int denom = 6; // 3!
    uniform int sign = -1;
    for (uniform int j=1; j<=terms; j++) {
      value += sign * numer / denom;
      numer *= x[idx] * x[idx];
      denom *= (2*j+2) * (2*j+3);
      sign *= -1;
    }
    result[idx] = value;
  }
}
```

**ISPC language keywords:**
- **programCount**: number of simultaneously executing instances in the gang (uniform value)
- **programIndex**: id of the current instance in the gang (a non-uniform value: "varying")
- **uniform**: A type modifier. All instances have the same value for this variable. Its use is purely an optimization. Not needed for correctness.
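To build intuition for the abstraction (not the implementation), the gang can be emulated in plain C++: run the loop body once per programIndex, exactly as the interleaved assignment above describes. The function name and the serial inner loop below are ours; real ISPC maps the gang onto SIMD lanes, not a serial loop.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Scalar emulation of the ISPC gang with interleaved assignment: this
// mirrors idx = i + programIndex in the ISPC code above. Pedagogical
// sketch only; ISPC executes the gang on SIMD lanes, not serially.
void sinx_gang_emulation(int N, int terms, const float* x, float* result,
                         int programCount = 4) {
    for (int i = 0; i < N; i += programCount) {              // "uniform" loop
        for (int programIndex = 0; programIndex < programCount; programIndex++) {
            int idx = i + programIndex;                       // "varying"
            float value = x[idx];
            float numer = x[idx] * x[idx] * x[idx];
            int denom = 6;  // 3!
            int sign = -1;
            for (int j = 1; j <= terms; j++) {
                value += sign * numer / denom;
                numer *= x[idx] * x[idx];
                denom *= (2 * j + 2) * (2 * j + 3);
                sign *= -1;
            }
            result[idx] = value;
        }
    }
}
```

Note that the emulation computes exactly the same values as the serial `sinx()`; the gang changes only which logical instance handles which element.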
Interleaved assignment of program instances to loop iterations

"Gang" of ISPC program instances

In this illustration: gang contains four instances: `programCount` = 4

**ISPC implements the gang abstraction using SIMD instructions**

**C++ code:** main.cpp

```c++
#include "sinx_ispc.h"

int main(int argc, void** argv) {
  int N = 1024;
  int terms = 5;
  float* x = new float[N];
  float* result = new float[N];

  // initialize x here

  // execute ISPC code
  ispc_sinx(N, terms, x, result);
  return 0;
}
```

**SPMD programming abstraction:**

Call to ISPC function spawns "gang" of ISPC "program instances"
- All instances run ISPC code simultaneously
- Upon return, all instances have completed

**ISPC compiler generates SIMD implementation:**
- Number of instances in a gang is the SIMD width of the hardware (or a small multiple of SIMD width)
- ISPC compiler generates a C++ function binary (.o) whose body contains SIMD instructions
- C++ code links against generated object file as usual

**sinx() in ISPC: version 2**

"Blocked" assignment of elements to instances

### C++ code: main.cpp

```cpp
#include "sinx_ispc.h"

int main(int argc, void** argv) {
  int N = 1024;
  int terms = 5;
  float* x = new float[N];
  float* result = new float[N];

  // initialize x here

  // execute ISPC code
  ispc_sinx_v2(N, terms, x, result);
  return 0;
}
```

### ISPC code: sinx.ispc

```ispc
export void ispc_sinx_v2(
  uniform int N,
  uniform int terms,
  uniform float* x,
  uniform float* result)
{
  // assume N % programCount = 0
  uniform int count = N / programCount;
  int start = programIndex * count;
  for (uniform int i=0; i<count; i++) {
    int idx = start + i;
    float value = x[idx];
    float numer = x[idx] * x[idx] * x[idx];
    uniform int denom = 6; // 3!
    uniform int sign = -1;
    for (uniform int j=1; j<=terms; j++) {
      value += sign * numer / denom;
      numer *= x[idx] * x[idx];
      denom *= (2*j+2) * (2*j+3);
      sign *= -1;
    }
    result[idx] = value;
  }
}
```

Blocked assignment of program instances to loop iterations

Elements of output array (results)

<table>
<thead>
<tr>
<th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> <th>11</th> <th>12</th> <th>13</th> <th>14</th> <th>15</th>
</tr>
</thead>
</table>

"Gang" of ISPC program instances. In this illustration: gang contains four instances: `programCount = 4`

Schedule: interleaved assignment

"Gang" of ISPC program instances. Gang contains four instances: programCount = 4

<table>
<thead>
<tr>
<th>Instance 0 (programIndex = 0)</th> <th>Instance 1 (programIndex = 1)</th> <th>Instance 2 (programIndex = 2)</th> <th>Instance 3 (programIndex = 3)</th>
</tr>
</thead>
<tbody>
<tr> <td>0</td> <td>1</td> <td>2</td> <td>3</td> </tr>
<tr> <td>4</td> <td>5</td> <td>6</td> <td>7</td> </tr>
<tr> <td>8</td> <td>9</td> <td>10</td> <td>11</td> </tr>
<tr> <td>12</td> <td>13</td> <td>14</td> <td>15</td> </tr>
</tbody>
</table>

Single "packed vector load" instruction (_mm_load_ps) efficiently implements: float value = x[idx]; for all program instances, since the four values are contiguous in memory

// assumes N % programCount = 0 for (uniform int i=0; i<N; i+=programCount) { int idx = i + programIndex; float value = x[idx]; ...
Schedule: blocked assignment

"Gang" of ISPC program instances. Gang contains four instances: programCount = 4

<table>
<thead>
<tr>
<th>Instance 0 (programIndex = 0)</th> <th>Instance 1 (programIndex = 1)</th> <th>Instance 2 (programIndex = 2)</th> <th>Instance 3 (programIndex = 3)</th>
</tr>
</thead>
<tbody>
<tr> <td>0</td> <td>4</td> <td>8</td> <td>12</td> </tr>
<tr> <td>1</td> <td>5</td> <td>9</td> <td>13</td> </tr>
<tr> <td>2</td> <td>6</td> <td>10</td> <td>14</td> </tr>
<tr> <td>3</td> <td>7</td> <td>11</td> <td>15</td> </tr>
</tbody>
</table>

float value = x[idx]; now touches four non-contiguous values in memory. Need "gather" instruction to implement (gather is a more complex, and more costly SIMD instruction: only available since 2013 as part of AVX2)

Raising level of abstraction with foreach

C++ code: main.cpp

```cpp
#include "sinx_ispc.h"

int N = 1024;
int terms = 5;
float* x = new float[N];
float* result = new float[N];

// initialize x here

// execute ISPC code
ispc_sinx(N, terms, x, result);
```

ISPC code: sinx.ispc

```ispc
export void ispc_sinx(
  uniform int N,
  uniform int terms,
  uniform float* x,
  uniform float* result)
{
  foreach (i = 0 ... N) {
    float value = x[i];
    float numer = x[i] * x[i] * x[i];
    uniform int denom = 6; // 3!
    uniform int sign = -1;
    for (uniform int j=1; j<=terms; j++) {
      value += sign * numer / denom;
      numer *= x[i] * x[i];
      denom *= (2*j+2) * (2*j+3);
      sign *= -1;
    }
    result[i] = value;
  }
}
```

foreach: key ISPC language construct

- **foreach** declares parallel loop iterations
- Programmer says: these are the iterations the gang (not each instance) must perform
- ISPC implementation assigns iterations to program instances in the gang
- Current ISPC implementation will perform a static interleaved assignment (but the abstraction permits a different assignment)

ISPC: abstraction vs.
implementation

- Single program, multiple data (SPMD) programming model
  - Programmer "thinks": running a gang is spawning programCount logical instruction streams (each with a different value of programIndex)
  - This is the programming abstraction
  - Program is written in terms of this abstraction
- Single instruction, multiple data (SIMD) implementation
  - ISPC compiler emits vector instructions (e.g., AVX2) that carry out the logic performed by an ISPC gang
  - ISPC compiler handles mapping of conditional control flow to vector instructions (by masking vector lanes, etc. like you do manually in assignment 1)
- Semantics of ISPC can be tricky
  - SPMD abstraction + uniform values (allows implementation details to peek through abstraction a bit)

ISPC discussion: sum "reduction"

Compute the sum of all array elements in parallel

A naive attempt that accumulates directly into a single variable with `sum += x[i]` does not compile: sum is of type uniform float (one copy of variable for all program instances), while x[i] is not a uniform expression (different value for each program instance). Result: compile-time type error.

**Correct ISPC solution**

```ispc
export uniform float sumall2(
  uniform int N,
  uniform float* x)
{
  uniform float sum;
  float partial = 0.0f;
  foreach (i = 0 ... N) {
    partial += x[i];
  }

  // from ISPC math library
  sum = reduce_add(partial);
  return sum;
}
```

ISPC discussion: sum "reduction"

Compute the sum of all array elements in parallel. Each instance accumulates a private partial sum (no communication). Partial sums are added together using the `reduce_add()` cross-instance communication primitive.
The result is the same total sum for all program instances (`reduce_add()` returns a uniform float). The ISPC code at right will execute in a manner similar to the handwritten C + AVX intrinsics implementation below.*

```c
float sumall2(int N, float* x) {
  float tmp[8];  // assume 32-byte alignment
  __m256 partial = _mm256_setzero_ps();
  for (int i=0; i<N; i+=8)
    partial = _mm256_add_ps(partial, _mm256_load_ps(&x[i]));
  _mm256_store_ps(tmp, partial);

  float sum = 0.f;
  for (int i=0; i<8; i++)
    sum += tmp[i];
  return sum;
}
```

*Self-test: If you understand why this implementation complies with the semantics of the ISPC gang abstraction, then you've got a good command of ISPC

SPMD programming model summary

- **SPMD** = "single program, multiple data"
- Define one function, run multiple instances of that function in parallel on different input arguments

ISPC tasks

- The ISPC gang abstraction is implemented by SIMD instructions that execute within one thread running on one x86 core of a CPU.
- So all the code I've shown you in the previous slides would have executed on only one of the four cores of the myth machines.
- ISPC contains another abstraction: a "task" that is used to achieve multi-core execution. I'll let you read up about that.

Part 2 of today's lecture

- Three parallel programming models
  - That differ in what communication abstractions they present to the programmer
  - Programming models are important because they (1) influence how programmers think when writing programs and (2) influence the design of parallel hardware platforms designed to execute them efficiently
- Corresponding machine architectures
  - Abstraction presented by the hardware to low-level software
  - We'll focus on differences in communication/synchronization

Three programming models (abstractions)

1. Shared address space
2. Message passing
3. Data parallel

Shared address space model

What is memory?
- On the first day of class, we described a program as a sequence of instructions
- Some of those instructions read and write from memory
- But what is memory?
- To be precise, what I'm really asking is: what is the logical abstraction of memory as presented to a program

A program's memory address space

- A computer's memory is organized as an array of bytes
- Each byte is identified by its "address" in memory (its position in this array) (in this class we assume memory is byte-addressable)

"The byte stored at address 0x8 has the value 32."
"The byte stored at address 0x10 (16) has the value 128."

In the illustration on the right, the program's memory address space is 32 bytes in size (so valid addresses range from 0x0 to 0x1F)

The implementation of the linear memory address space abstraction on a modern computer is complex. The instruction "load the value stored at address X into register R0" might involve a complex sequence of operations by multiple data caches and access to DRAM.

Shared address space model (abstraction)

Threads communicate by reading/writing to shared variables

Thread 1:
```c
int x = 0;
spawn_thread(foo, &x);

// write to address holding
// contents of variable x
x = 1;
```

Thread 2:
```c
void foo(int* x) {
  // read from addr storing
  // contents of variable x
  while (*x == 0) {}
  print *x;
}
```

(Pseudocode provided in a fake C-like language for brevity.)

A common metaphor

Shared address space model

Threads must synchronize their reads and writes to shared variables. Synchronization primitives are also shared variables: e.g., locks

Thread 1:
```c
int x = 0;
Lock my_lock;

spawn_thread(foo, &x, &my_lock);

my_lock.lock();
x++;
my_lock.unlock();
```

Thread 2:
```c
void foo(int* x, Lock* my_lock) {
  my_lock->lock();
  x++;
  my_lock->unlock();
  print x;
}
```

Review: why do we need mutual exclusion?
- Each thread executes
  - Load the value of `diff` from location in memory into register `r1` *(this stores a copy of the value in memory in the register)*
  - Add the register `r2` to register `r1`
  - Store the value of register `r1` into `diff`
- One possible interleaving: (let starting value of `diff`=0, `r2`=1)

<table>
<thead>
<tr> <th>T0</th> <th>T1</th> </tr>
</thead>
<tbody>
<tr> <td><code>r1 ← diff</code> (T0 reads value 0)</td> <td></td> </tr>
<tr> <td></td> <td><code>r1 ← diff</code> (T1 reads value 0)</td> </tr>
<tr> <td><code>r1 ← r1 + r2</code> (T0 sets value of its <code>r1</code> to 1)</td> <td></td> </tr>
<tr> <td></td> <td><code>r1 ← r1 + r2</code> (T1 sets value of its <code>r1</code> to 1)</td> </tr>
<tr> <td><code>diff ← r1</code> (T0 stores 1 to <code>diff</code>)</td> <td></td> </tr>
<tr> <td></td> <td><code>diff ← r1</code> (T1 stores 1 to <code>diff</code>)</td> </tr>
</tbody>
</table>

Under this interleaving the final value of `diff` is 1, even though two increments were performed.

- This set of three instructions must be "atomic"

Mechanisms for preserving atomicity

- Lock/unlock mutex around a critical section
```
LOCK(mylock);
// critical section
UNLOCK(mylock);
```
- Some languages have first-class support for atomicity of code blocks
```
atomic {
  // critical section
}
```
- Intrinsics for hardware-supported atomic read-modify-write operations
```
atomicAdd(x, 10);
```

Review: shared address space model

- Threads communicate by:
  - Reading/writing to shared variables in a shared address space
  - Inter-thread communication is implicit in memory loads/stores
    - Thread 1 stores to X
    - Later, thread 2 reads X (and observes update of value by thread 1)
  - Manipulating synchronization primitives
    - e.g., ensuring mutual exclusion via use of locks
- This is a natural extension of sequential programming
- In fact, all our discussions in class have assumed a shared address space so far!
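For concreteness, here is a minimal C++11 sketch (our own example, not from the slides) showing both mechanisms applied to a shared counter like `diff` above: a mutex-protected critical section, and a hardware-supported atomic read-modify-write.

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// Mutex version: the load/add/store of diff happens inside a critical
// section, so no two threads can interleave within it.
int counter_with_lock(int num_threads, int increments_per_thread) {
    int diff = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < increments_per_thread; i++) {
            std::lock_guard<std::mutex> guard(m);  // critical section
            diff++;
        }
    };
    std::vector<std::thread> threads;
    for (int t = 0; t < num_threads; t++) threads.emplace_back(work);
    for (auto& th : threads) th.join();
    return diff;
}

// Atomic version: fetch_add is a single hardware read-modify-write,
// the C++ analogue of the atomicAdd(x, 10) intrinsic above.
int counter_with_atomic(int num_threads, int increments_per_thread) {
    std::atomic<int> diff{0};
    auto work = [&] {
        for (int i = 0; i < increments_per_thread; i++)
            diff.fetch_add(1);
    };
    std::vector<std::thread> threads;
    for (int t = 0; t < num_threads; t++) threads.emplace_back(work);
    for (auto& th : threads) th.join();
    return diff.load();
}
```

With either mechanism, no increment is lost; remove the lock (or use a plain `int` without the atomic) and the lost-update interleaving shown in the table becomes possible.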
HW implementation of a shared address space

Key idea: any processor can directly reference contents of any memory location

* Caches (not shown) are another implementation of a shared address space (more on this in a later lecture)

Shared address space HW architecture

Example: Intel Core i7 processor (Kaby Lake)

Intel Core i7 (quad core) (interconnect is a ring)

Intel's ring interconnect

Introduced in Sandy Bridge microarchitecture
- Four rings
  - request
  - snoop
  - ack
  - data (32 bytes)
- Six interconnect nodes: four "slices" of L3 cache + system agent + graphics
- Each bank of L3 connected to ring bus twice
- Theoretical peak BW from cores to L3 at 3.4 GHz is approx. 435 GB/sec
  - When each core is accessing its local slice

SUN Niagara 2 (UltraSPARC T2): crossbar interconnect

Note area of crossbar (CCX): about same area as one core on chip. Eight cores.

72 cores, arranged as 6 x 6 mesh of tiles (2 cores/tile)

YX routing of messages:
- Message travels in Y direction
- "Turn"
- Message travels in X direction

Non-uniform memory access (NUMA)

The latency of accessing a memory location may be different from different processing cores in the system. Example: latency to access address $x$ is higher from cores 5-8 than cores 1-4. Example: modern dual-socket configuration. Bandwidth from any one location may also be different to different CPU cores.

Summary: shared address space model

- **Communication abstraction**
  - Threads read/write variables in shared address space
  - Threads manipulate synchronization primitives: locks, atomic ops, etc.
- Logical extension of uniprocessor programming *
- **Requires hardware support to implement efficiently**
  - Any processor can load and store from any address
  - Can be costly to scale to large numbers of processors (one of the reasons why high-core count processors are expensive)

* But NUMA implementation requires reasoning about locality for performance

Message passing model of communication

Message passing model (abstraction)

- Threads operate within their own private address spaces
- Threads communicate by sending/receiving messages
  - send: specifies recipient, buffer to be transmitted, and optional message identifier ("tag")
  - receive: specifies sender, buffer to store data, and optional message identifier
  - Sending messages is the only way to exchange data between threads 1 and 2
    - Why?

Illustration adopted from Culler, Singh, Gupta

A common metaphor: snail mail

Message passing (implementation)

- Hardware need not implement system-wide loads and stores to execute message passing programs (it need only communicate messages between nodes).
- Can connect commodity systems together to form a large parallel machine (message passing is a programming model for clusters and supercomputers).

Keep in mind (again): programming model abstraction is distinct from its implementation

- Common to implement message passing abstractions on machines that implement a shared address space in hardware
  - "Sending message" = copying memory into message library buffers
  - "Receiving message" = copying data from message library buffers
- Can implement shared address space abstraction on machines that do not support it in HW (via less efficient SW implementations)
  - OS marks all pages with shared variables as invalid
  - OS page-fault handler issues appropriate network requests
- Keep clear in your mind: what is the programming model (abstractions used to specify program)? And what is the HW implementation?
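The "message passing implemented on shared memory" point can be sketched concretely: below is a minimal blocking channel in C++ (all names are ours, not a real MPI-style API). "Sending" copies the message into a buffer owned by the channel, and "receiving" copies it back out; the two threads never share application data directly.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Minimal blocking channel between two threads. The internal queue plays
// the role of the "message library buffers" mentioned above.
template <typename T>
class Channel {
public:
    void send(const T& msg) {
        {
            std::lock_guard<std::mutex> guard(m_);
            buffer_.push(msg);          // "sending" = copy into library buffer
        }
        cv_.notify_one();
    }

    T recv() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return !buffer_.empty(); });
        T msg = buffer_.front();        // "receiving" = copy out of buffer
        buffer_.pop();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> buffer_;
};
```

The channel itself is built from shared-address-space primitives (a mutex and a condition variable), which is exactly the layering described above: the message passing abstraction running on shared-memory hardware.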
The data-parallel model

Programming models provide a way to think about the organization of parallel programs (by imposing structure)

- **Shared address space**: very little structure to communication
  - All threads can read and write to all shared variables
- **Message passing**: communication is structured in the form of messages
  - All communication occurs in the form of messages (communication is explicit in source code: the sends and receives)
- **Data parallel**: more rigid structure to computation
  - Perform same function on elements of large collections

Data-parallel model *

- Organize computation as operations on sequences of elements
  - e.g., perform same function on all elements of a sequence
- Historically: same operation on each element of an array
  - Matched the capabilities of SIMD supercomputers of the 80's
  - Connection Machine (CM-1, CM-2): thousands of processors, one instruction decode unit
  - Early Cray supercomputers were vector processors
    - `add(A, B, n)` ← this was one instruction on vectors A, B of length n
- A well-known modern example: NumPy: \( C = A + B \) (A, B, and C are vectors of same length)

* We'll have multiple lectures in the course about data-parallel programming and data-parallel thinking: this is just a taste

Key data type: sequences

- Ordered collection of elements
- For example, in a C++ like language: Sequence&lt;T&gt;
- e.g., Scala lists: List[T]
- In a functional language (like Haskell): seq T
- Program can only access elements of sequence through specific operations

**Map**

- Higher order function (function that takes a function as an argument) that operates on sequences
- Applies side-effect free unary function `f :: a -> b` to all elements of input sequence, to produce output sequence of the same length
- In a functional language (e.g., Haskell)
  - `map :: (a -> b) -> seq a -> seq b`
- In C++:
```cpp
template<class InputIt, class OutputIt, class UnaryOperation>
OutputIt transform(InputIt first1, InputIt last1,
OutputIt d_first, UnaryOperation unary_op); ```

Parallelizing map - Since \( f :: a \rightarrow b \) is a function (side-effect free), applying \( f \) to all elements of the sequence can be done \textbf{in any order} without changing the output of the program. - The implementation of map has flexibility to reorder/parallelize processing of elements of sequence however it sees fit.

Optimizing data movement in map - Consider code that performs two back-to-back maps (like the first two map calls in the code below) - An optimizing compiler or runtime can reorganize the code (the fused parallel_for loop below) to eliminate memory loads and stores (“map fusion”) - Additional optimizations: highly optimized implementations of map can also perform optimizations like prefetching next element of input sequence (to hide memory latency) - Think to yourself: why are these complex optimizations possible?

```cpp
const int N = 1024;
Sequence<float> input(N);
Sequence<float> tmp(N);
Sequence<float> output(N);

// two back-to-back maps, with an intermediate sequence tmp
map(foo, input, tmp);
map(bar, tmp, output);

// fused version: one pass, no intermediate loads/stores
parallel_for(int i=0; i<N; i++) {
  output[i] = bar(foo(input[i]));
}
```

Data parallelism in ISPC

// main C++ code: const int N = 1024; float* x = new float[N]; float* y = new float[N]; // initialize N elements of x here absolute_value(N, x, y);

// ISPC code: export void absolute_value( uniform int N, uniform float* x, uniform float* y) { foreach (i = 0 ... N) { if (x[i] < 0) y[i] = -x[i]; else y[i] = x[i]; } }

foreach construct

Think of loop body as a function. Given this program, it is reasonable to think of the program as using foreach to “map the loop body onto each element” of the arrays X and Y. But if we want to be more precise: a sequence is not a first-class ISPC concept. It is implicitly defined by how the program has implemented array indexing logic in the foreach loop.
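The claim behind map fusion can be checked directly: because the functions are pure, applying them in two passes or in one fused pass yields identical output. A small Python sketch (`foo` and `bar` are arbitrary pure functions chosen here for illustration):

```python
# Map fusion sketch: map(bar, map(foo, xs)) == one pass of bar(foo(x)).
# Purity of foo and bar is what makes the transformation legal.

def foo(x):
    return x + 1

def bar(x):
    return x * 2

xs = list(range(8))

# two back-to-back maps: materializes an intermediate sequence (tmp)
tmp = list(map(foo, xs))
out_unfused = list(map(bar, tmp))

# fused version: one pass, no intermediate sequence
out_fused = [bar(foo(x)) for x in xs]

assert out_unfused == out_fused
print(out_fused)  # [2, 4, 6, 8, 10, 12, 14, 16]
```

If `foo` or `bar` had side effects, the intermediate sequence and the evaluation order would become observable, and the fusion (and any parallelization) would no longer be safe.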
(There is no operation in ISPC with the semantic: “map this code over all elements of this sequence”)

Data parallelism in ISPC

// ISPC code: export void absolute_repeat( uniform int N, uniform float* x, uniform float* y) { foreach (i = 0 ... N) { if (x[i] < 0) y[2*i] = -x[i]; else y[2*i] = x[i]; y[2*i+1] = y[2*i]; } }

Think of loop body as a function. The input/output sequences being mapped over are implicitly defined by array indexing logic. This is also a valid ISPC program! It takes the absolute value of elements of x, then repeats it twice in the output array y. (Less obvious how to think of this code as mapping the loop body onto existing sequences.)

Data parallelism in ISPC

// main C++ code: const int N = 1024; float* x = new float[N]; float* y = new float[N]; // initialize N elements of x shift_negative(N, x, y);

// ISPC code: export void shift_negative( uniform int N, uniform float* x, uniform float* y) { foreach (i = 0 ... N) { if (i >= 1 && x[i] < 0) y[i-1] = x[i]; else y[i] = x[i]; } }

Think of loop body as a function. The input/output sequences being mapped over are implicitly defined by array indexing logic. The output of this program is undefined! Multiple iterations of the loop body may write to the same memory location. The data-parallel model (foreach) provides no specification of the order in which iterations occur, and it provides no primitives for fine-grained mutual exclusion/synchronization. It is not intended to help programmers write programs with that structure.

Gather/scatter: two key data-parallel sequence operations

Map absolute_value() onto stream produced by gather:
```cpp
const int N = 1024;
Sequence<float> input(N);
Sequence<int> indices;
Sequence<float> tmp_input(N);
Sequence<float> output(N);
stream_gather(input, indices, tmp_input);
absolute_value(tmp_input, output);
```
ISPC equivalent:
```cpp
export void absolute_value(
    uniform int N,
    uniform float* input,
    uniform float* output,
    uniform int* indices)
{
  foreach (i = 0 ... N) {
    float tmp = input[indices[i]];
    if (tmp < 0) output[i] = -tmp;
    else output[i] = tmp;
  }
}
```
Map absolute_value() onto stream, scatter results:
```cpp
const int N = 1024;
Sequence<float> input(N);
Sequence<int> indices;
Sequence<float> tmp_output(N);
Sequence<float> output(N);
absolute_value(input, tmp_output);
stream_scatter(tmp_output, indices, output);
```
ISPC equivalent:
```cpp
export void absolute_value(
    uniform int N,
    uniform float* input,
    uniform float* output,
    uniform int* indices)
{
  foreach (i = 0 ... N) {
    if (input[i] < 0) output[indices[i]] = -input[i];
    else output[indices[i]] = input[i];
  }
}
```

Gather instruction

gather(R1, R0, mem_base); "Gather from buffer mem_base into R1 according to indices specified by R0." Array in memory with (base address = mem_base) Index vector: R0 Result vector: R1

Gather supported with AVX2 in 2013. But AVX2 does not support SIMD scatter (must implement as scalar loop). Scatter instruction exists in AVX512. Hardware-supported gather/scatter does exist on GPUs (still an expensive operation compared to load/store of contiguous vector).

Summary: data-parallel model - Data-parallelism is about imposing rigid program structure to facilitate simple programming and advanced optimizations - Basic structure: map a function onto a large collection of data - Functional: side-effect free execution - No communication among distinct function invocations (allows invocations to be scheduled in any order, including in parallel) - Other data-parallel operators express more complex patterns on sequences: gather, scatter, reduce, scan, shift, etc. - This will be a topic of a later lecture - You will think in terms of data-parallel primitives often in this class, but many modern performance-oriented data-parallel languages do not enforce this structure in the language - Many languages (like ISPC, CUDA, etc.)
choose flexibility/familiarity of imperative C-style syntax over the safety of a more functional form

Summary - Programming models provide a way to think about the organization of parallel programs. - They provide abstractions that permit multiple valid implementations. - *I want you to always be thinking about abstraction vs. implementation for the remainder of this course.*

Summary

Restrictions imposed by these abstractions are designed to: 1. Reflect realities of parallelization and communication costs to programmer (help a programmer write efficient programs) - Shared address space machines: hardware supports any processor accessing any address - Message passing machines: hardware may accelerate message send/receive/buffering - Desirable to keep “abstraction distance” low so programs have predictable performance, but want abstractions to be high enough for code flexibility/portability 2. Provide useful information to implementors of optimizing compilers/runtimes/hardware to help them efficiently implement programs using these abstractions - Consider optimizations possible when implementing ISPC foreach vs. a higher-order map

Modern practice: mixed programming models - Use shared address space programming within a multi-core node of a cluster, use message passing between nodes - Very common in practice - Offers convenience of shared address space where it can be implemented efficiently (within a node), requires explicit communication elsewhere - Data-parallel-ish programming models often support shared-memory style synchronization primitives in functions - e.g., CUDA, OpenCL - In a future lecture... CUDA/OpenCL use data-parallel model to scale to many cores, but adopt shared-address space model allowing threads running on the same core to communicate.

Questions to consider - Programming models enforce different forms of structure on programs. What are the benefits of data-parallel structure? - With respect to the goals of efficiency/performance...
what do you think are problems of adopting a very high level of abstraction in a programming system? - What about potential benefits? - Choose a popular parallel programming system (for example Hadoop, Spark, or Cilk) and try to describe its programming model (how are communication and execution expressed?)
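As a concrete reference point for the gather and scatter primitives discussed above, here is a pure-Python sketch. The function names `gather` and `scatter` mirror the lecture's `stream_gather`/`stream_scatter` but are invented here for illustration; this is not ISPC or library code.

```python
# Gather and scatter as data-parallel sequence operations.

def gather(src, indices):
    # result[i] = src[indices[i]]
    return [src[i] for i in indices]

def scatter(values, indices, dst):
    # dst[indices[i]] = values[i]. If indices contains duplicates, the
    # final contents depend on execution order -- the same hazard that
    # makes the shift_negative ISPC example's output undefined.
    for i, v in zip(indices, values):
        dst[i] = v

src = [10.0, -20.0, 30.0, -40.0]
idx = [3, 1, 0, 2]

# map absolute_value onto the stream produced by gather
tmp = gather(src, idx)        # [-40.0, -20.0, 10.0, 30.0]
out = [abs(v) for v in tmp]   # [40.0, 20.0, 10.0, 30.0]

# map absolute_value, then scatter the results
vals = [abs(v) for v in src]  # [10.0, 20.0, 30.0, 40.0]
out2 = [0.0] * 4
scatter(vals, idx, out2)      # out2 == [30.0, 20.0, 40.0, 10.0]
```

Because `idx` here is a permutation, the scatter is well defined; with repeated indices the loop order would become observable, which is why hardware scatter over a SIMD vector must define (or leave undefined) its conflict behavior.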
University of Huddersfield Repository

Original Citation: Simpson, R.M., Kitchin, Diane E. and McCluskey, T.L. Planning domain definition using GIPO. This version is available at http://eprints.hud.ac.uk/495/

Planning domain definition using GIPO

R. M. SIMPSON, D. E. KITCHIN and T. L. McCLUSKEY
School of Computing and Engineering, The University of Huddersfield, Huddersfield HD1 3DH, UK; e-mail: r.m.simpson@hud.ac.uk

Abstract

In this paper an object-centric perspective on planning domain definition is presented along with an overview of GIPO (graphical interface for planning with objects), a supporting tools environment. It is argued that the object-centric view assists the domain developer in conceptualizing the domain’s structure, and we show how GIPO enables the developer to capture that conceptualization at an appropriate and matching conceptual level.
GIPO is an experimental environment which provides a platform for exploring and demonstrating the range and scope of tools required to support the knowledge engineering aspects of creating and validating planning systems, both for classical pre-condition planning and hierarchical planning. GIPO embodies the object-centric view, leading to a range of benefits typically associated with object-oriented methods in other fields of software engineering such as highly visual development methods, code reuse and efficient, reliable development. 1 Introduction This article postulates an object-centric medium for the formulation of planning domain definitions, and describes GIPO (graphical interface for planning with objects), a tools environment which embodies this approach. Our work is concerned with simplifying the task faced by knowledge engineers when developing problem and domain definitions suitable for use with domain-independent planning software. The aspect of knowledge engineering supported by the object-centric approach is that of the formulation of the problem scenario. Its use presupposes that a problem scenario appropriate for the deployment of domain-independent planning technology has already been identified and analysed, but has not been formally encoded in a planning specification language. Formulating the domain model normally requires great skill and understanding of the specification language. GIPO is an experimental research environment, used both as a research platform and in education. It has been used to support teaching of artificial intelligence (AI) planning at undergraduate level. The use in education has motivated the development of a succession of revisions each introducing higher level conceptualizations and visualization of planning domain knowledge. 
Our experience using GIPO in teaching indicates that it simplifies the task of grasping the structure of existing planning domains and the task of creating and validating new domain definitions (McCluskey & Simpson, 2005). In order to use GIPO one must conceptualize the planning problem as involving a set of objects, where each object is a member of one type — here called the object’s ‘sort’. For each sort, a set of states is specified such that each object of that sort must occupy exactly one of the states. Additionally, objects of each sort may have a list of properties describing them. Consequently, domains can be formulated by defining the possible transitions that can occur within each sort and the properties of each sort. Then plan execution involves executing actions which change the state and properties of these objects. This is illustrated in the Dock Worker Robots (DWR) example, a domain in which automated cranes and robot trucks manage and transport containers around a shipping dock port (for a detailed description of the DWR example see Ghallab et al., 2004). The domain can be modelled by describing the changes that can happen to the object sorts, that is, the robots, the containers and the cranes. Traditionally, creating specifications for AI planning domains involved the author focusing on the actions available to solve given problem instances. These actions are modelled by parameterized structures called action (or operator) schema (we use the term actions in this paper). In the object-centric view, the focus is on the possible significant changes of state that the objects populating the problem scenario can make. In the DWR world the focus changes from considering, for example, how the ‘lifting’ of the robot arm may be specified to describing the states of the robot arm as ‘free’ or ‘busy’, and the states of the containers as ‘lifted’ or ‘stacked on a container pile’ or ‘loaded on a ship’. 
Action definitions are then synthesized from the component descriptions of the changes that occur to the individual objects. In this way, the object-centric method provides guidance on how the author can create action definitions. The domain definition task has thus, we believe, been decomposed into smaller, more manageable sub-tasks. The object-centric view is also capable of being captured by state machine representations which can be a useful aid in understanding. A method for formulating domain definitions based on the object-centric idea was introduced by McCluskey et al. (1996) and McCluskey & Porteous (1997), and was linked to the development of the definition language OCL, later refined into OCLh (McCluskey & Kitchin, 1998). The object-centric approach has its roots in OCL but is capable of being lifted above the particular language that the definition is encoded in. Tool support enables the domain definition to be translated into other languages, principally PDDL, the dominant language used for the communication of domain definitions in AI planning (Ghallab et al., 1998; Fox & Long, 2001). The current version of GIPO has two major operating modes. The standard mode allows the creation of classical planning domains. The internal representation allows the capture of domains of the complexity of those describable in PDDL version 1.7 without the use of any hierarchical task network (HTN) features. The second major mode enables HTN planning and is supported by the HyHTN planner (McCluskey et al., 2003). In both modes the tool set contains graphical editors to assist in the creation of the domains, built-in planners to solve developed problems and animators to graphically inspect the plans produced. Manual steppers are provided by GIPO in both modes to assist in dynamically validating domain specifications.
The steppers allow the user to create plans for well-understood example problems and inspect points of failure in cases where no plan (or no correct plan) is generated by the available planners. GIPO has an open API to link public domain AI planning engines to the system. Planning systems that process PDDL domain and problem descriptions from a command line interface can be executed by planner-specific scripts from within the GIPO environment. After a plan is generated it is returned to GIPO and loaded into the plan animator allowing the solution to be visualized. We aim to develop GIPO in step with the expressiveness of standard AI planner domain description languages to preserve this external planner link. This paper presents an introduction to the underlying philosophy of the ‘object-centric’ view and shows how it supports a visual state machine type representation. We show the essential structure of the state machine representation and demonstrate how it maps to PDDL domain definitions. In the second part of the paper, an overview of the scope of the GIPO tool set is presented and we show how it capitalizes on the underlying ‘object-centric’ view. 2 The object-centric view 2.1 Ontological assumptions of the object view The basic assumption of the object view is that within any problem scenario that presents a planning problem there will be objects that are changed in some way during the execution of plans. This set of dynamic objects can be partitioned into subsets such that each member of a subset is distinguishable, relative to the planning task, by name only. Each object belonging to such a subset is capable of making the same changes. For each of these dynamic subsets of objects, which we call sorts (deriving from many-sorted logics; Manzano, 1993) the primary changes they undergo can be described by identifying named states that they change between. 
Additionally, for each such sort there may be properties, which are functions on an object’s state, that characterize the sort’s individuals. These properties may be static in the sense that they do not change during plan execution or they may be dynamic and are subject to change. Changes either to an object’s state or to its dynamic properties are brought about by actions that can be controlled by the planning executive. A change may be accompanied by a constraint that requires the properties of the object to meet some condition. The distinction between what is classified as a named state and what as a property is to some extent pragmatic. In many cases the distinction will be intuitively obvious. A ‘robot truck’ may be described as having states ‘available’ or ‘out of service’ but have the property of being located. Factoring out properties of objects allows for a more succinct state machine representation of an object’s ‘life history’ as described in the following text. Given the above categorization of planning domain scenarios, we can represent the possible changes objects may make during plan execution with a form of finite state machine. Consider an object $o_1$ of sort $O$ capable of being in state $s_1$ or $s_2$. Objects of sort $O$ have a property $P$ that can take on the values $p_1$, $p_2$ or $p_3$. Let us assume that an object of sort $O$ can change between states $s_1$ and $s_2$ by performing action $a_1$, and between $s_2$ and $s_1$ by performing action $a_2$, and that during these changes the property $P$ does not change. Such a scenario could be depicted by three disconnected state machines. Further, if we allow action $a_3$ to change the property $P$ from $p_1$ to $p_2$ or from $p_2$ to $p_3$ or from $p_3$ to $p_1$, the three connected state machines could be shown as in Figure 1. If we have multiple objects of sort $O$ then their potential changes would each be shown on a structurally identical state machine. 
Such forests of state machine diagrams are clearly unwieldy but they can be simplified into a compact form: we can use one diagram to represent each object of the same sort, and we can remove the duplication resulting from different property values. The result of such simplification allows us to represent the same information in the state machine shown in Figure 2(a), where the constraint on action $a_3$ is given by the relation $\text{next}(p_1,p_2),\text{next}(p_2,p_3),\text{next}(p_3,p_1)$. In this diagram, we present alongside the abstract machine a possible realization within the DWR domain where we have robots that must be enabled before they can move between adjacent locations. The arrows labelled only by action names are assumed to leave properties of the objects unchanged. The ‘abstract state machine’ diagrams form the basis of the history view of planning domains. We call such diagrams ‘object life history diagrams’. The propositional description of the domain can be derived from the diagrams.

Figure 1 State machine view

2.2 Deriving propositional descriptions from state diagrams

Propositional descriptions of domains may be easily derived from abstract state machines. The domain fragment described above and shown in Figure 2(a) supports two PDDL types $o$ and $p$ for the dynamic objects and their properties. The predicates are then formed to identify the states of the dynamic objects and to associate each object with the current value of its property. The property $P$ is given some appropriate name and a type chosen for its potential values. In this example, the name and type are ‘prop’ and ‘p’, respectively. The next constraint, which limits the range of property changes, is defined by instances of the ‘next’ predicate. In the translations that follow we show both the translation of the abstract example and the corresponding example drawn from the DWR domain.
Figure 2 Abstract state machine view (a), Realization in DWR domain (b)

Listing 1 Propositions and types

In this way we see that thinking of the domain’s dynamic objects as embodying state machines has given us the propositional descriptions (Listing 1). The action descriptions also follow as easily. For actions such as $a1$ and $a2$ that only involve state changes, the action description simply requires a precondition that the object is in the source state and has the effect that the object is asserted to be in the target state and no longer in the source state. Listing 2 shows the PDDL definition for the actions $a1$ and enable from DWR.

Listing 2 Simple state changing actions

Actions involving property changes, such as $a3$ (Listing 3), in addition to referencing the state predicate in the precondition also require that the object’s property $prop$ has a value that appears in the constraint clause next. The effect list retracts that value of the property and asserts the new value as dictated by the next predicate. The instances of the next predicate are given in PDDL as part of a problem definition.

Listing 3 Actions that change property values

At this level it is a simple extension to allow an action to bring about both state changes and property changes. It is also straightforward to allow objects to have multiple properties.
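The translation scheme of Section 2.2 is mechanical enough to sketch in a few lines of Python. This is a toy generator illustrating the idea for simple state-changing actions like $a1$, not GIPO's actual implementation:

```python
# Toy illustration of Section 2.2: derive a PDDL action description from
# a single state transition in a sort's state machine.

def state_change_action(name, sort, src, dst):
    # A simple state-changing action: the precondition says the object is
    # in the source state; the effect retracts the source state and
    # asserts the target state.
    var = "?O"
    return (
        f"(:action {name}\n"
        f"  :parameters ({var} - {sort})\n"
        f"  :precondition ({src} {var})\n"
        f"  :effect (and (not ({src} {var})) ({dst} {var})))"
    )

print(state_change_action("a1", "o", "s1", "s2"))
```

Running this emits the `a1` action in the same shape as the paper's listings: precondition `(s1 ?O)`, effect `(and (not (s1 ?O)) (s2 ?O))`. Property-changing actions like $a3$ would additionally thread the `prop` and `next` predicates through the precondition and effect.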
2.3 Combining object state machines

Domain definitions normally involve changes to objects drawn from different dynamic sorts, and the changes made in the life history of one object sort may be dependent on coordinating with the states and changes made to some other object sorts. We will illustrate the ways in which state machines may combine with reference to Figure 3, which introduces a second state machine for objects of sort $O_2$. This state machine is structurally identical to that in Figure 2. Again, we also give an example drawn from the DWR domain. We describe variations on three different combinations where coordination among state machines occurs. 1. Prevail requires that for an object of sort $O$ to make a transition, some object, normally of some other sort $O_2$, must be in a required state, and remain in that state during action execution. 2. Necessary combinations require two or more objects, normally of different sorts, to make transitions simultaneously. 3. Conditional combinations require one object, if in the appropriate state, to make a transition only if a second object, normally of a different sort, makes a specified transition.

2.3.1 Prevail combinations

In Figure 3(a) the action $a_1$ for sort $O$ may require that an object instance of sort $O_2$ is in state $s_{21}$ and remains in that state. This requires a parameter for an object of type $O_2$ to be added to the action definition, and also that the predicate asserting that the object instance $?O2$ is in state $s_{21}$ is added to the action's precondition as shown in Listing 4.
```
(:action a1
 :parameters (?O - o, ?O2 - o2)
 :precondition (and (s1 ?O) (s21 ?O2))
 :effect (and (not (s1 ?O)) (s2 ?O)))
```

```
(:action load
 :parameters (?C - container, ?R - robot)
 :precondition (and (held ?C) (disabled ?R))
 :effect (and (not (held ?C)) (loaded ?C)))
```

Listing 4 Actions requiring a prevail combination

The prevail condition can be more complex in that the connection may set up an enduring association between the object of sort $O$ and the object of sort $O_2$. This, for example, could happen in the DWR domain when a container is loaded onto a robot truck. It must be remembered onto which truck the container is loaded so that it can eventually be unloaded from the same truck. The association needs to be remembered in all the states reachable as a result of performing the action until such time as the association is explicitly ended. In the DWR example the unload action will explicitly end the association. To capture such associations diagrammatically we can add connecting arrows suitably annotated to show the intention, as in Figure 4, and this is what is done in the object life history editor (OLHE) of GIPO. In terms of the propositional code for the resulting action $a_1$, the association is captured by adding an extra argument to the target state of the action. In fact an extra argument will be added to all states reachable from the action and removed only when an action explicitly ends the association. In this example the state predicate for the state $s_2$ has been modified to include an argument for a value of type $O_2$. This definition of the state predicate replaces the old one and must be used in all references to state $s_2$ (Listing 5). There are other features that can be present in a prevail condition. For example, the properties of the connected objects may be required to be coordinated in some way, and there may also be a need for multiple instances of objects of sort $O_2$ to be associated with the action.
In the DWR example above, the properties of 'location' for the container and robot would be required to have the same value. How we deal with such elaborations is fully described in the GIPO manual, but essentially it involves annotating the arrows showing the connection between actions and states.

### 2.3.2 Necessary combinations

A second major way in which the actions of differing object sorts may need to be coordinated is where transitions from differing sorts both refer to the same action. It may, for example, be the case that both actions $a_1$ and $a_{21}$ are required to occur together. From the perspective of the planning executive they may refer to the same action. This is easily accomplished at the propositional level, where the bodies of the two distinct actions are combined into a single description and the result given a common action name (Listing 6).

**Figure 4** Multi-sort state machines showing a prevail connection: (a) abstract; (b) DWR.

(r.m. simpson et al. 122)

**Listing 5** Actions requiring prevail combinations with remembered association

```
(:action a1
   :parameters (?O - o, ?O2 - o2)
   :precondition (and (s1 ?O) (s21 ?O2))
   :effect (and (not (s1 ?O)) (s2 ?O ?O2))
)

(:action load
   :parameters (?C - container, ?R - robot)
   :precondition (and (held ?C) (disabled ?R))
   :effect (and (not (held ?C)) (loaded ?C ?R))
)
```

A second example might be where the self-loop transitions $a_3$ and $a_{23}$, as in Figure 4, are required to occur together. Both have the property-changing restriction that requires their declared property 'prop' to change value as dictated by the 'next' propositional restriction. In terms of the DWR example, we could require that the transport and move actions occur together, and that the container and robot make the same change in location. We show such restrictions on the diagrams by using double-headed arrows connecting the specified transitions.
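The propositional merge that implements a necessary combination can be sketched as follows. This is our illustration, not GIPO source code; the dictionary encoding and the name `merge_necessary` are invented for the example.

```python
# A sketch (ours, not GIPO source) of the 'necessary combination' merge:
# the bodies of two transitions that must occur together are combined into
# a single description under a common action name.

def merge_necessary(name, act_a, act_b):
    return {
        "name": name,
        "parameters": act_a["parameters"] + act_b["parameters"],
        "precondition": act_a["precondition"] + act_b["precondition"],
        "effect": act_a["effect"] + act_b["effect"],
    }

a1 = {"parameters": [("?O", "o")],
      "precondition": [("s1", "?O")],
      "effect": [("s2", "?O")]}
a21 = {"parameters": [("?O2", "o2")],
       "precondition": [("s21", "?O2")],
       "effect": [("s22", "?O2")]}

combined = merge_necessary("a1_a21", a1, a21)
print(combined["precondition"])  # [('s1', '?O'), ('s21', '?O2')]
print(combined["effect"])        # [('s2', '?O'), ('s22', '?O2')]
```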
As with prevail conditions, necessary conditions can be further elaborated to allow associations to be created and ended, and also to allow the properties of the connected objects to be required to match in some way.

### 2.3.3 Conditional combinations

Conditional combinations occur where the primary transition is joined with a secondary such that the primary transition may occur without the secondary, but not the secondary without the primary. If the above-described connection between $a_1$ and $a_{21}$ were a conditional combination rather than a necessary combination, the resulting action would be as seen in Listing 7.

**Listing 7** Action requiring conditional coordination

The conditional combination requires that every object capable of making the secondary transition, that is, where the precondition is met, must make the transition to the new state. The secondary transition cannot be made if there is no accompanying object making the primary transition. In the DWR example, the connections between the 'transport' action of the container and the 'move' action of the robot are most plausibly treated as a conditional combination. The robot may move without any container being loaded, whereas the container may only be transported once loaded on a robot. The full move action (automatically generated by GIPO), with the restriction that the robot can only move to adjacent locations, is shown in Listing 8.

3 The GIPO environment

Our intention in creating the GIPO environment is not simply to develop a tool for the creation of planning domains in the internal object-centric representation language, but to promote the tool as a modelling tool irrespective of the final target language. The overall architecture of the environment is shown in Figure 5. Central to GIPO is the object-centric internal representation of domains, which is manipulated by all major tool elements.
The environment contains a range of domain acquisition tools and associated static validation routines to promote the accuracy of the formulation. The global static validation tool can be used to report on likely faults and omissions in the model. Once a model appears to be acceptable, the plan stepper and plan animator, with the associated internal planners, can be used to further dynamically check the model. To enable GIPO to be used as a general domain modelling tool we have developed translators between our internal language OCLh and PDDL (Simpson et al., 2000). We also provide an API to enable external planning systems to interface to the tools, providing scope for testing and fielding alternative planning algorithms to those internal to GIPO. Currently the interface allows planners which can input OCL, or typed and optionally conditional PDDL. As an example, we have successfully tested the embedding of FF version 2.3 (Hoffmann, 2000) and LPG version 2.1 (Gerevini & Serina, 2002) into GIPO, allowing these planners to be run on selected problems and their output viewed in tools such as the plan animator. There is no requirement to amend any of the distributed code for third-party planners: pre- and post-processing scripts take care of differences among individual systems.

3.1 Initial domain definition within GIPO

GIPO provides a range of graphical editors to enable the initial creation of domain definitions. To use the basic editors the user follows the 'Domain Definition Methodology' as presented in the GIPO tutorials. These basic editors closely follow the structure of domains expressed in OCL or OCLh, where the user must first name the classes of objects which can participate in the problem domain. The user then, in sequence, defines the predicates that are used to describe object instances, and defines object class states, which characterize the legal combinations of predicates that may be used to describe object instances.
The concluding steps are to define the domain actions and HTN methods, if appropriate. Problem instances can then be defined and dynamic domain testing carried out. The basic method of domain editing, although removing the need to have a deep understanding of the domain formulation at a textual level, has been superseded by the OLHE, as described in the next subsection. A major limitation of the basic method is that it does not naturally give any guidance as to how the appropriate predicates should be chosen to describe the object types in the domain. To provide a rationale for the choice of predicates, the user needs to be provided with a guide to the potential role that the predicates can play in the description of the objects. Creating object life histories focuses the user on such roles, rather than on the predicates required to describe them.

**Listing 8** DWR action requiring conditional connection

3.2 Object life histories in GIPO

The OLHE allows the user to draw state machines that describe the domain's dynamic object classes. GIPO then automatically generates the domain definition from those diagrams. In Figure 6(a) we show the state machine for the crane as modelled in the DWR example. States are shown in rectangles with appropriate icons and state names, whereas state transitions are shown as roundtangles and labelled with the name of an action that would bring about the change in state as shown by the transition arrows. The figure shows that a crane is in one of two states and that there are two different actions that can trigger state changes from either state to the other. In Figure 6(b) we show the robot state machine and show how we diagrammatically represent the ability of a robot to change location by driving along the dock track. The 'moveTo' and 'takeTo' transitions are property-changing transitions, which are distinguished from other transitions by colour.
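The object life histories drawn in the OLHE are, at bottom, labelled state machines. A minimal Python sketch of such a machine follows; the crane's state and action names here are invented for illustration, since the text only states that the crane has two states and two actions.

```python
# A minimal sketch of an object life history as a labelled state machine,
# mirroring the two-state crane of the DWR example.  The state names
# 'available'/'gripping' and action names 'grab'/'putdown' are our
# inventions for illustration.

crane = {
    "states": {"available", "gripping"},
    "transitions": {              # action name -> (from state, to state)
        "grab":    ("available", "gripping"),
        "putdown": ("gripping", "available"),
    },
}

def step(machine, state, action):
    """Apply `action` to an object in `state`; fail on illegal transitions."""
    src, dst = machine["transitions"][action]
    if state != src:
        raise ValueError(f"{action} is not applicable in state {state}")
    return dst

print(step(crane, "available", "grab"))  # gripping
```

Every transition arrow in an OLHE diagram corresponds to one entry in such a transition table, which is why the textual domain definition can be generated mechanically from the drawing.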
When a robot makes one of these transitions, it will change property value — in this case the location of the robot. Viewing the property inspector in the editor (Figure 7) reveals the properties associated with the objects and any constraints placed on property changes. An indication of how the connections are shown between the state machines for the different dynamic sorts is given in Figure 8, where the complete model for the DWR domain is presented. ### 3.3 Scaling to large domains Using the basic OLHE, designing a large domain specification at the level of charting every object transition and all connections among them is still a complex task. We do believe, however, that the visualizations greatly expedite the task of domain definition. To assist further in providing visualizations that are easy to grasp, we provide features such as the ability to selectively view part of the emerging domain and to switch on and off the connections linking the different sort state diagrams. More importantly, to aid both visualization and reuse, GIPO provides methods to allow some of the complexity to be encapsulated in higher-order structures. Such higher-order structures form 'packages'. We need this both to simplify diagrams, to allow the essential structure of the domain to be more easily envisaged, and to allow for reuse of complex but often repeated structural elements. We require reuse for different object types within the same domain and across multiple domains. GIPO provides mechanisms to allow domain developers to isolate diagrams which may be formulated into package structures. These provide a public interface to private substructures and are stored in a library for reuse. Packages are used in the completed DWR model as shown in Figure 8. In this diagram the states of the container while on a stack are encapsulated in the package 'onStack'. By constructing complex state machines and showing how action transitions coordinate, complete domain definitions can be built up.
The textual representation of the domain is generated automatically from the diagram. To produce a testable domain, all the user needs to do is add the information to create problem instances. GIPO provides support for this in the 'Task Editor'. The user is presented with lists of predicates defining the possible states of each object class and is allowed to select possible values to instantiate both initial and goal states for tasks. This process is shown in Figure 9.

3.4 Opmaker

To lower the threshold of prior knowledge required to develop planning domain models, GIPO incorporates an action induction process called opmaker (the reader is referred to McCluskey et al. (2002) for a more detailed description and evaluation of the tool). This tool is aimed at the knowledge engineer with good domain knowledge but weaker general knowledge of AI planning. Opmaker requires as input an initial structural description of the domain along with a well-understood training problem, accompanied by an action sequence adequate to solve the training problem. In particular, we assume that the modeller's partial construction of the domain definition has reached the stage where there exist at least a class hierarchy and predicate and state definitions. This may have been done using either the basic editors of GIPO or by partially describing the domain using the OLHE. To run opmaker the user must specify the training problem, using the task editor (see Figure 9). A task specification defines an initial state for every object in the problem and the desired state of a subset of these objects as the goal state to be achieved. The user now supplies opmaker with the training sequence of actions. An action is simply the chosen name for the action followed by the names of all objects that participate in the application of the action.
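The two opmaker inputs described above can be pictured as simple data. The following sketch is our illustration (the object names and state names are invented; `undefined_objects` is a hypothetical sanity check, not a documented GIPO function):

```python
# Hypothetical sketch of the opmaker inputs: a task gives an initial
# state for every object (and goals for a subset), and the training
# sequence is a list of action names with their participating objects.
# All names here are invented for illustration.

task = {
    "initial": {"c1": "on_stack", "r1": "at_location", "crane1": "available"},
    "goal":    {"c1": "loaded"},
}

training_sequence = [
    ("pickup", ["crane1", "c1"]),          # action name + participants
    ("load",   ["crane1", "c1", "r1"]),
]

def undefined_objects(task, sequence):
    """Objects named in the sequence but given no initial state."""
    return [o for _, objs in sequence for o in objs
            if o not in task["initial"]]

print(undefined_objects(task, training_sequence))  # []
```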
A good sequence of actions would ideally include instances of all actions required in the domain, though this is not required by opmaker; the action set can be built up incrementally using different problem instances. A snapshot of an element of the dialogue carried out by opmaker to help infer action structure is shown in Figure 10. The strategy opmaker uses relies on the structural knowledge within the partial domain definition already specified. In particular, for each type of object in the domain there will exist an abstract specification of each possible state that objects of that sort can be in. Opmaker works by stepping through the training example, advancing the state of each referenced object from the initial state to the next legal state, by deducing the possible legal states of the affected objects referenced in the training action step. When there are multiple possible legal states that an object may advance to, the user is queried to determine which of the possible states the object should be in. This is shown in Figure 10, where actions are being derived for a domain with trains moving on a single-line track. The drop-down list contains all possible legal states for the 'track' instance 't1'. Before the application of the 'drive' action, the track segment 't1' was 'occupied'. The user should confirm in this case that the result of the action will be that 't1' is now in the state 'free'. Once the state transitions of the named object instances are known, this information can be generalized to produce an action schema. The derived action schema will be used in future uses of the action in the training sequence and may be refined in cases where the derived action only provides a partial match with the new instance. In this way opmaker steps through the training sequence, querying the user and advancing the state of each object referenced in the action schema, until the training sequence is exhausted.
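The core of the induction loop just described can be sketched in a few lines of Python. This is our simplified illustration, not opmaker's implementation: where several successor states are legal, opmaker queries the user, which we simplify away here by assuming a unique successor.

```python
# Simplified sketch of the opmaker induction loop.  For each training
# action we advance every referenced object to a legal successor state,
# and record the observed (sort, before, after) transitions, generalized
# by dropping the object names, as the induced action schema.

legal_successors = {        # sort -> state -> legal next states
    "track": {"occupied": ["free"], "free": ["occupied"]},
}

def induce(states, sequence):
    schemas = {}
    for action, refs in sequence:            # refs: [(object, sort), ...]
        transitions = []
        for obj, sort in refs:
            before = states[obj]
            successors = legal_successors[sort][before]
            # Here opmaker would query the user if len(successors) > 1.
            assert len(successors) == 1, "ambiguous: would ask the user"
            after = successors[0]
            states[obj] = after
            transitions.append((sort, before, after))
        schemas.setdefault(action, transitions)
    return schemas

states = {"t1": "occupied", "t2": "free"}
schemas = induce(states, [("drive", [("t1", "track"), ("t2", "track")])])
print(schemas["drive"])
# [('track', 'occupied', 'free'), ('track', 'free', 'occupied')]
```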
3.5 HTN planning

GIPO provides editors and tools to support HTN planning as expressed in the OCL language (McCluskey & Kitchin, 1998). In brief, GIPO allows the definition of methods, which are actions that can be specified in terms of a composition of actions as defined for classical planning. Primitive actions are the non-composite actions defined in GIPO as already described. HTN methods are defined in terms of three elements:

1. A declaration of the changes that the composite action guarantees to bring about for identified object types.

2. A definition of any precondition applying to any associated required object referenced in the guarantee.

3. A graph of subactions which, if performed, would bring about the guaranteed changes. This graph can contain nodes that we call 'achieve goals', which represent preconditions that may have to be achieved before a specific action in the subgraph can be carried out.

In GIPO the method editor allows each of these segments to be represented in a graphical form. In Figure 11 we see a method called 'carry_direct' being defined. This is a method that might be used in a logistics-type domain, where packages have to be delivered by a variety of forms of transportation. The top two roundtangles define the changes guaranteed when the package is transported. The guarantee requires that the package be in a state that would form a ground instance of the LHS roundtangle, namely that it is at some location $O$ and that it is waiting and certified. The RHS, or postcondition, states that the package will end up at a new destination $D$. The bottom roundtangle expresses the precondition that the city of origin of the package and of the destination be the same. The graph of actions forming the decomposition of the method is shown in Figure 12. Both primitive actions and other methods can be used in the definition of a decomposition. Method definitions may be recursive.
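The three elements of a method can be pictured as structured data. The sketch below is our illustration of the 'carry_direct' example; the subaction names (`load_package`, `move`, `unload_package`) and the exact predicate forms are assumptions, since the figures are not reproduced here.

```python
# Hypothetical sketch of an HTN method's three elements, loosely following
# the 'carry_direct' example: a guarantee on the package, a static
# precondition, and a decomposition graph that may contain achieve-goals.
# Subaction names and predicate forms are invented for illustration.

carry_direct = {
    "guarantee": {
        "sort": "package",
        "before": ["at(P, O)", "waiting(P)", "certified(P)"],
        "after":  ["at(P, D)"],
    },
    "precondition": ["in_city(O, C)", "in_city(D, C)"],
    "decomposition": [
        ("achieve", "at(V, O)"),               # get some vehicle V to O first
        ("action",  "load_package(P, V, O)"),
        ("action",  "move(V, O, D)"),
        ("action",  "unload_package(P, V, D)"),
    ],
}

print(len(carry_direct["decomposition"]))  # 4
```

The achieve-goal node is what lets the planner insert whatever subplan brings a vehicle to the package's location, rather than fixing that subplan in the method.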
A decomposition may include preconditions that apply to the actions forming the decomposition. In Figure 12 the rectangle containing the predicate $at(V, O)$ expresses the precondition that the vehicle used to load the package must be at the same location $O$ as the package. To support HTN planning, GIPO contains a hybrid task-reduction planner called HyHTN. This planner is similar to SHOP (Nau et al., 1999) in that it is a state-advancing planner, but it is also capable of combining hierarchical decomposition with a state-space search using the plan-graph heuristic. In experiments HyHTN performed well in comparison to SHOP (McCluskey et al., 2003). GIPO also contains an animator for plans produced by HyHTN, as well as a hierarchical plan stepper. A partial snapshot of the animator in use with a logistics domain is shown in Figure 13.

### 3.6 Static validation

The validation of a domain definition cannot be done entirely automatically, although automated assistance in this task can be provided. Within an HTN domain, GIPO can check that the transparency property (McCluskey et al., 2003) is not broken by any method definition. The transparency property gives a guarantee that if a method's preconditions are met then the body of the method will bring about the method's postconditions. The property is checked by performing abstract execution of a method's decomposition body. Warnings are then displayed to the user if a step cannot be fulfilled, given the specified preconditions of the method. In Figure 14 we see a DWR-style domain where an object is to be loaded from a gripper but where the gripper cannot be guaranteed to hold the object. Within classical domains the automatic checks that GIPO can carry out tend to be at a lower, syntactic level, but the absence of such problems as misspelt predicate names can still save the domain developer many hours of dynamic testing.
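The abstract execution underlying the transparency check can be sketched as follows. This is our simplified illustration, not GIPO's checker: we track the set of facts guaranteed so far and warn when a step's precondition is not among them.

```python
# Simplified sketch of a transparency check by abstract execution: walk
# the decomposition, accumulating guaranteed facts, and warn when a
# step's precondition cannot be guaranteed at that point.

def check_transparency(method_pre, steps):
    """steps: list of (name, preconditions, effects) triples."""
    known = set(method_pre)
    warnings = []
    for name, pre, eff in steps:
        for p in pre:
            if p not in known:
                warnings.append(f"{name}: cannot guarantee {p}")
        known |= set(eff)
    return warnings

# Mirrors the Figure 14 situation: unloading from a gripper that cannot
# be guaranteed to hold the object.
steps = [
    ("unload", ["holding(gripper, obj)"], ["at(obj, table)"]),
]
print(check_transparency(["free(gripper)"], steps))
# ['unload: cannot guarantee holding(gripper, obj)']
```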
The action schema analysis tool checks the usage of each of the defined states and predicates as they are referenced in the actions' components. This is particularly useful in conjunction with opmaker. A state definition that is not referenced by any action most likely indicates that the action coverage of the domain is still incomplete. Likewise, states that are only ever referenced in the precondition, or only in the postcondition, act as a potential indicator of incompleteness. When the OLHE is used, internal consistency checking is applied before generation of the domain definition.

### 3.7 Dynamic validation

The most powerful facilities that GIPO provides for dynamic validation of domains are the manual steppers. The role of the steppers is to allow the domain engineer to check that the domain specification does support known plans for well-understood problems within the domain. This may be checked by running planners on the known problems, but failure to find the plan may indicate a problem or limitation of the planner, rather than of the domain specification. To check the domain independently of any particular planner, the plan needs to be manually produced, which is done using GIPO's steppers. The stepper for classical planning works as a forward planner where the user selects the actions to solve the problem. As the application of each action is checked, the user can isolate the point where a domain definition fails to allow an action to be performed where the user thinks the action should be allowed. The stepper is adept at helping the user discover subtle bugs and their location within a domain definition. In Figure 15 a domain to test a model of multiple trains moving on a single-line track is being stepped. The user is instantiating an instance of the 'drive' action to step through the growing plan. For HTN domains the stepper works in a top-down, left-to-right mode.
When a user selects a method as part of a plan, the decomposition of that method must be manually stepped. The HTN stepper incrementally produces a diagram with a structure identical to that produced by the HTN plan animator, as shown in Figure 13.

3.8 Implementation

GIPO is largely written in Java and is hence platform independent. The integrated planners, including HyHTN, are written in Prolog, as are some of the other tools. The GIPO distribution includes SICStus Prolog run-time environments to support the Prolog subsystem. External third-party planners can be run from within GIPO if they have a command-line interface that allows the specification of input domain and problem files and they process classical PDDL. Planner-specific scripts are required to pre- and post-process the planner input and output to enable the planner to be fully integrated in the system. The GIPO distribution is binary, though the Java sources are also made freely available.

4 Related work

4.1 Related tool sets

Environments of the complexity of GIPO to support domain-independent planning have been created previously. Both SIPE (Wilkins, 1999, 2000) and O-Plan (Tate et al., 1994, 2005) are such environments. Both are large, complex systems and have user interfaces designed to assist in the task of domain definition. Both systems, however, were designed to be highly coupled to their own built-in HTN planning engines. In contrast, GIPO has an open API for linking public domain planning engines to the system. Similar work to our own in knowledge acquisition and engineering tends to be aimed at general knowledge-based systems rather than being specific to AI planning. For example, systems such as those based on EXPECT (Blythe et al., 2001) or PROTEGE (Gennari et al., 2003) are more general purpose and do not aim at providing support for the very specific task of acquiring domain knowledge with a view to producing a formal specification as an output to be used with planning engines.
A recent tools environment called itSIMPLE (Vacquero, Tonidanel & Silva, 2005) is, like GIPO, based on an object-centric perspective and aimed at the acquisition of planning knowledge. It includes tools for the acquisition and manipulation of domain definitions, but it differs in that it adopts the widely used software system modelling language UML as its underlying philosophy. This approach may well help to make AI planning techniques more accessible by using an approach that is well known to software engineers. However, it is yet to be shown that the general UML framework is appropriate for engineering the peculiarities of planning domain definitions. Edelkamp and Mehler's ModPlan (Edelkamp & Mehler, 2005) is another recently developed tool which helps in planning knowledge acquisition and engineering. Their workbench includes a range of functions, including static analysis, goal-ordering generation and domain inference. Their work can be seen as complementary to ours, as the functions of ModPlan have the perspective of acquiring heuristics to aid the efficiency of AI planners. The functions of GIPO, however, are aimed less at acquiring heuristics and more at acquiring and validating domain structure.

4.2 Representational issues

The use of state machines to describe elements of planning domains is not new. State machines are used by Fox & Long (1997) in domain analysis to extract and describe useful structure from domain specifications with a view to enhancing the efficiency of planning software. The novel angle of our work is to use state machines, allied to the object-centric view, to form a basis for the creation and systematic description of complete domain definitions. The work on analogical reasoning (Garagnani, 2004) bears some superficial similarities to this work, but differs in its purpose, in that it introduces a notation that is designed to be more efficient than symbolic notations.
Further, Garagnani postulates a diagrammatic inter-lingua for domain definitions themselves, whereas diagrams in our work are used as an interface to help in the formulation of a symbolic definition. The diagrammatic formalism introduced in this paper clearly approximates in expressive power the classical planning representational forms such as STRIPS (Fikes & Nilsson, 1971) and SAS (Backstrom & Nebel, 1991). In the sections above we showed how the 'life history view' can be translated into PDDL, which can be regarded as a STRIPS derivative. To demonstrate equivalent expressive power we need to demonstrate how domains in PDDL can be translated into life history diagrams. Informally, it is easier to show how SAS encodings can be translated into life history diagrams, and as the equivalence of SAS and STRIPS encodings is already established in the literature (Nebel, 2000), such a translation would demonstrate that for classical domains the life history model is equivalent in expressive power. In outline, the object properties are functions on object states, and object states can in turn be regarded as functions on the global state. To encode any SAS state, the variables of the state need to be partitioned with reference to object sorts, and then each such variable is regarded either as an object state function or as a property function on such object states. Our current diagrammatic formalism falls short of ADL-type languages in that explicit quantification over objects is not possible. This aspect is the subject of ongoing research. GIPO's internal representation language is based on previous work by McCluskey & Porteous (1997). The reader is referred to that reference for more information on the representation language and a further discussion of related work.

5 Conclusion and future work

In this paper we have shown how an object-centric view can assist planning domain formulation.
We have also shown how that view can be used to superimpose structure and guide specification in languages such as PDDL. Environments such as O-Plan (Tate et al., 1994, 2005) and SIPE-2 (Wilkins, 1999, 2000) amply demonstrate that complex tool environments are required to enable AI planning solutions to be adopted in organizational contexts. GIPO provides much of this support when deploying the range of currently available planning systems using either PDDL or OCL. GIPO is still under development. Its OLHE is at a beta level of release. We are still experimenting with the nature of the visualization and with the editing mechanisms, to allow the life histories to be easily produced, edited and encapsulated into reusable library structures. Although GIPO's OLHE is being used in undergraduate teaching, a full user evaluation is still to be carried out. Two main enhancements are desirable:

1. A significant engineering challenge in AI planning is the efficient acquisition of HTN actions. Currently, the OLHE is restricted to non-hierarchical domains. A very useful development of the OLHE would be to enable it to capture HTN actions, and hierarchical domain structure in general.

2. The scope of domains capable of being modelled within GIPO needs extending, to keep pace with new versions of PDDL. There is a partial implementation of PDDL level 5 in the current release, together with an updated OLHE, but with reduced tool support. For example, we currently have no planner integrated into GIPO that can generate plans in such continuous domains.

GIPO is available from http://scom.hud.ac.uk/planform/gipo.

Acknowledgements

We acknowledge the help of others in the GIPO project. Many of the Prolog tools, and in particular the planner HyHTN, were written by Donghong Liu. Weihong Zhao contributed significantly to the creation of the Java interface.
Other members of the planning team at the University of Huddersfield have contributed intellectually to its development, as did members of the Planform project http://scom.hud.ac.uk/planform/.

References

McCluskey, T. L., Richardson, N. E. and Simpson, R. M. 2002. An interactive method for inducing operator descriptions. In *The 6th International Conference on Artificial Intelligence Planning and Scheduling*. AAAI.

McCluskey, T. L., Liu, D. and Simpson, R. M. 2003. GIPO II: HTN planning in a tool-supported knowledge engineering environment. In *The 13th International Conference on Automated Planning and Scheduling*. AAAI.

Vacquero, T. S., Tonidanel, F. and Silva, J. R. 2005. The itSIMPLE tool for modelling planning domains. In *Proceedings of the First International Competition on Knowledge Engineering for AI Planning*. Monterey, California, USA.
{"Source-Url": "http://eprints.hud.ac.uk/495/1/McCluskeyPlanning.pdf", "len_cl100k_base": 9583, "olmocr-version": "0.1.49", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 43571, "total-output-tokens": 11987, "length": "2e13", "weborganizer": {"__label__adult": 0.0003864765167236328, "__label__art_design": 0.000988006591796875, "__label__crime_law": 0.0004291534423828125, "__label__education_jobs": 0.005901336669921875, "__label__entertainment": 0.00015497207641601562, "__label__fashion_beauty": 0.0002741813659667969, "__label__finance_business": 0.0006194114685058594, "__label__food_dining": 0.00037288665771484375, "__label__games": 0.0009713172912597656, "__label__hardware": 0.0009398460388183594, "__label__health": 0.0005502700805664062, "__label__history": 0.0005903244018554688, "__label__home_hobbies": 0.0002446174621582031, "__label__industrial": 0.0009517669677734376, "__label__literature": 0.0006771087646484375, "__label__politics": 0.0003731250762939453, "__label__religion": 0.0005693435668945312, "__label__science_tech": 0.2000732421875, "__label__social_life": 0.00022304058074951172, "__label__software": 0.02777099609375, "__label__software_dev": 0.7548828125, "__label__sports_fitness": 0.0003342628479003906, "__label__transportation": 0.0013265609741210938, "__label__travel": 0.00027942657470703125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 51546, 0.02845]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 51546, 0.69397]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 51546, 0.90548]], "google_gemma-3-12b-it_contains_pii": [[0, 1257, false], [1257, 4589, null], [4589, 8953, null], [8953, 12789, null], [12789, 15041, null], [15041, 17358, null], [17358, 20216, null], [20216, 22175, null], [22175, 25687, null], [25687, 27175, null], [27175, 29049, null], [29049, 32359, null], [32359, 35353, 
null], [35353, 36663, null], [36663, 37315, null], [37315, 39708, null], [39708, 43877, null], [43877, 47475, null], [47475, 51546, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1257, true], [1257, 4589, null], [4589, 8953, null], [8953, 12789, null], [12789, 15041, null], [15041, 17358, null], [17358, 20216, null], [20216, 22175, null], [22175, 25687, null], [25687, 27175, null], [27175, 29049, null], [29049, 32359, null], [32359, 35353, null], [35353, 36663, null], [36663, 37315, null], [37315, 39708, null], [39708, 43877, null], [43877, 47475, null], [47475, 51546, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 51546, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 51546, null]], "pdf_page_numbers": [[0, 1257, 1], [1257, 4589, 2], [4589, 8953, 3], [8953, 12789, 4], [12789, 15041, 5], [15041, 17358, 6], [17358, 20216, 7], [20216, 22175, 8], [22175, 25687, 9], [25687, 27175, 10], [27175, 29049, 11], [29049, 32359, 12], [32359, 35353, 13], [35353, 36663, 14], [36663, 37315, 15], [37315, 39708, 16], [39708, 43877, 17], [43877, 47475, 18], [47475, 51546, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 51546, 0.00505]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
3a1a853ef3d41a2e6e4dc77265eac1270893bf18
[REMOVED]
{"Source-Url": "https://prosecco.gforge.inria.fr/personal/karthik/pubs/typechecking-higher-order-security-libraries-aplas10.pdf", "len_cl100k_base": 13551, "olmocr-version": "0.1.51", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 68240, "total-output-tokens": 15974, "length": "2e13", "weborganizer": {"__label__adult": 0.0004224777221679687, "__label__art_design": 0.0002765655517578125, "__label__crime_law": 0.0006055831909179688, "__label__education_jobs": 0.0003445148468017578, "__label__entertainment": 5.841255187988281e-05, "__label__fashion_beauty": 0.00015807151794433594, "__label__finance_business": 0.0002052783966064453, "__label__food_dining": 0.0004162788391113281, "__label__games": 0.0007734298706054688, "__label__hardware": 0.0009293556213378906, "__label__health": 0.0006031990051269531, "__label__history": 0.00022864341735839844, "__label__home_hobbies": 8.71419906616211e-05, "__label__industrial": 0.00043892860412597656, "__label__literature": 0.0002191066741943359, "__label__politics": 0.0003349781036376953, "__label__religion": 0.000522613525390625, "__label__science_tech": 0.0198211669921875, "__label__social_life": 7.218122482299805e-05, "__label__software": 0.0045166015625, "__label__software_dev": 0.9677734375, "__label__sports_fitness": 0.0003540515899658203, "__label__transportation": 0.0005831718444824219, "__label__travel": 0.0002053976058959961}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 55500, 0.00986]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 55500, 0.38043]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 55500, 0.81406]], "google_gemma-3-12b-it_contains_pii": [[0, 3058, false], [3058, 6829, null], [6829, 10195, null], [10195, 14153, null], [14153, 17752, null], [17752, 20068, null], [20068, 23857, null], [23857, 27250, null], [27250, 30568, null], [30568, 34056, null], 
[34056, 38554, null], [38554, 41677, null], [41677, 45156, null], [45156, 48915, null], [48915, 52363, null], [52363, 55500, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3058, true], [3058, 6829, null], [6829, 10195, null], [10195, 14153, null], [14153, 17752, null], [17752, 20068, null], [20068, 23857, null], [23857, 27250, null], [27250, 30568, null], [30568, 34056, null], [34056, 38554, null], [38554, 41677, null], [41677, 45156, null], [45156, 48915, null], [48915, 52363, null], [52363, 55500, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 55500, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 55500, null]], "pdf_page_numbers": [[0, 3058, 1], [3058, 6829, 2], [6829, 10195, 3], [10195, 14153, 4], [14153, 17752, 5], [17752, 20068, 6], [20068, 23857, 7], [23857, 27250, 8], [27250, 30568, 9], [30568, 34056, 10], [34056, 38554, 11], [38554, 41677, 12], [41677, 45156, 13], [45156, 48915, 14], [48915, 52363, 15], [52363, 55500, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 55500, 0.0]]}
olmocr_science_pdfs
2024-12-04
2024-12-04
ae75d17881fbf02a23a93dfb95e90d9a5953a062
Data-Parallel Flattening by Expansion

Martin Elsman, University of Copenhagen, Denmark (mael@di.ku.dk)
Troels Henriksen, University of Copenhagen, Denmark (athas@sigkill.dk)
Niels Gustav Westphal Serup, University of Copenhagen, Denmark (ngws@metanohi.name)

Abstract. We present a higher-order programmer-level technique for compiling particular kinds of irregular data-parallel problems to parallel hardware. The technique, which we have named "flattening-by-expansion", builds on a number of segmented data-parallel operations but is itself implemented as a higher-order generic function, which makes it useful for many irregular problems. Concretely, the implementation is given in Futhark and we demonstrate the usefulness of the functionality for a number of irregular problems, showing that, in practice, the irregular problems are compiled to efficient parallel code that can be executed on GPUs. The technique is useful in any data-parallel language that provides a key set of primitives.

CCS Concepts: • Computing methodologies → Parallel programming languages; • Software and its engineering → Source code generation; Software performance.

Keywords: GPGPU programming, irregular nested parallelism, flattening, functional programming.

1 Introduction

While the development in computer architectures is increasingly providing improved parallel performance characteristics, the world's programmers have difficulties making efficient use of the increased number of computational parallel units. For some domains, program abstractions make it possible for programmers to make use of libraries to release the potential of new hardware, while such an approach can be difficult to apply for other domains where high performance can be achieved only by parallelising particular domain-specific algorithms. Ideally, one could hope that existing code for such algorithms could be compiled unchanged to exploit the unreleased power of the new architectures.
However, although such an approach sometimes works, in general it does not. In such cases, programmers will often need to master parallel programming using low-level abstractions, such as those provided by CUDA and OpenCL, for which the learning curve is steep and the maintenance and development costs are high. Modern hardware favors regular data-parallel patterns, and it is well understood how nested regular patterns, too, can be transformed into flat regular parallelism. However, nested irregular parallelism introduces problems that are not easily dealt with. In particular, the overhead of managing segment descriptors (i.e., data that describes the irregularity) and the additional overhead of applying segmented operations become problematic, and, often, the overhead becomes difficult for programmers to understand and reason about. In 1990, Blelloch introduced the functional programming language NESL [5], a high-level first-order parallel functional language, centered around the idea that nested parallelism (including irregular parallelism) could be eliminated at compile time by a transformation called flattening. NESL was built around a single parallel array comprehension construct, which, as it turned out, could be used to implement a series of parallel constructs such as map, reduce, filter, and scan [4, 6–8]. This flattening approach to supporting irregular parallelism has later been refined for efficiency purposes [2, 10, 11, 24]. In general, however, it is difficult for general flattening techniques to compete with hand-optimised flattened code. Futhark is a statically typed parallel functional array language, which provides good support for a notion of so-called moderate flattening, allowing many cases of regular nested parallelism to be mapped to efficient flat parallelism [19].
Whereas regularly nested parallel map constructs can be translated trivially to flat parallelism, it is not immediately clear, in general, how to support non-regular nested map constructs efficiently. Futhark further implements a notion of incremental flattening [20], which generates multiple code versions in situations where the best flattening strategy depends on the properties of the input data. For instance, the best GPU code for dense matrix multiplication depends highly on the sizes of the matrix dimensions. For dealing with irregular problems, Futhark requires the programmer to implement the flattening strategy by hand, which can be quite cumbersome. The programmer may, however, choose different strategies for the implementation, which in some cases could involve padding (not work efficient, but sometimes the most performant technique in practice) or full flattening, which leads to work-efficient implementations that are, however, sometimes slow in practice.\footnote{A parallel algorithm is said to be work efficient if the work (i.e., number of operations) performed by the algorithm is of the same asymptotic complexity as the work performed by the best known sequential algorithm that solves the same problem.} In this paper, we present a design pattern for obtaining a full-flattened implementation of a certain class of irregular data-parallel problems.
The design pattern is implemented in Futhark as a generic higher-order function expand, which has the following generic type:

```futhark
val expand 'a 'b : (a -> i32) -> (a -> i32 -> b) -> []a -> []b
```

The function expands a source array, of type []a, into a target array, of type []b, given (1) a function that determines, for each source element, how many target elements it expands to and (2) a function that computes a particular target element based on a source element and the target element index associated with the source. As a simple example, the expression expand (\x -> x) (*) [2,3,1] returns the array [0,2,0,3,6,0]. Here (\x -> x) denotes the identity function and (*) denotes integer multiplication. Semantically, expand f g xs performs the equivalent of

```futhark
flatten (map (\x -> map (g x) (iota (f x))) xs)
```

where iota n produces the array of integers from 0 up to n-1, and flatten flattens an array of dimension n+1 into an array of dimension n. Notice that the inner map operates on an array of f x elements, which is variant to the outermost map. Thus, the array passed to flatten is potentially irregular. The purpose of expand is to support this kind of irregular map nest in languages that do not directly support irregular parallelism.
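The stated semantics can be checked directly with a small sequential model. The following Python sketch (ours, for illustration; not Futhark code) implements the specification as a flattened comprehension:

```python
def expand(f, g, xs):
    # Sequential model of the specification:
    #   flatten (map (\x -> map (g x) (iota (f x))) xs)
    # f: source element -> number of target elements
    # g: source element -> target index -> target element
    return [g(x, i) for x in xs for i in range(f(x))]

# The example from the text: expand (\x -> x) (*) [2,3,1]
print(expand(lambda x: x, lambda x, i: x * i, [2, 3, 1]))
# → [0, 2, 0, 3, 6, 0]
```

Note that a source element that expands to zero target elements simply contributes nothing to the result, which is exactly what makes the construct suitable for irregular problems.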
Given the usual definition of a fused implementation of flatten and map, called flatMap, which has type (a -> []b) -> []a -> []b, the semantics of expand can also be given by the equation

```futhark
expand f g = flatMap (\x -> map (g x) (iota (f x)))
```

As an example of using the expand function to solve an irregular problem, consider the task of finding the points in a 2d plane that constitute an array of line segments, each given by its end points; see Figure 1 for an example of a grid of lines. The technique we will use to "draw lines" resembles the development by Blelloch [5], with the difference that it makes use of the expand function and that the underlying language implementation does not support irregular parallelism. Using the expand function, all we need is to provide (1) a function that determines, for a given line, the number of points that make up the line and (2) a function that determines the nth point of a particular line, given the index n. The code for such an approach is listed in Figure 2. The function points_in_line makes use of the observation that the number of points that make up the constituting set of points, for the line with end points (x1, y1) and (x2, y2), is one plus the maximum of |x2 - x1| and |y2 - y1|, that is, one plus the maximum of the absolute values of the difference in x-coordinates and y-coordinates, respectively.\footnote{In Futhark, the dot-notation is overloaded and used for tuple- and record-projection, module access, and for locally opening a module inside the parentheses following the dot.} Using this observation, the function get_point_in_line can independently compute the ith point in the line by first calculating the proper direction and slope of the line (the two utility functions), relative to the line's starting point.
A conditional expression guides whether the x-dimension or the y-dimension is dominating. Using the flattening-by-expansion approach, we obtain a work-efficient implementation of line drawing, with work and span complexity bounded by the complexity properties of scan, on which the implementation of expand is founded.\footnote{By work we refer to the total number of operations performed and by span we refer to the length of the longest chain of dependent parallel operations (also sometimes called depth).} In the concrete case, the two function arguments passed to expand are constant-time operations, which means that, with n being the number of resulting points, the work complexity of the algorithm is O(n) and the span complexity of the algorithm is O(log n).

The contributions of this paper are the following:

1. We present a generic approach to implementing a class of irregular (and even nested) data-parallel problems using a language-agnostic construct called expand.
2. We present an implementation of expand in Futhark, based on low-level segmented operations.
3. We demonstrate the usefulness of the flattening-by-expansion technique by showing that it can be used for flattening a variety of real-world problems.
4. We demonstrate that the implementation of expand leads to efficient implementations of problems in practice by comparing the performance of the implementations with hand-flattened code in some cases.
5. We discuss the limitations of the approach and give an example of an extension of expand, called expand_reduce, which can be used for implementing, for instance, sparse matrix-vector multiplication.

The remainder of the paper is organised as follows. In the following sections, we present the details of the implementation of the expand function and the underlying segmented operations that the implementation builds on.
In Section 4, we demonstrate how the expand function can be used for implementing nested irregular parallelism. In particular, we show how it can be used to expand an array of triangles or circles into an array of lines, which can then be further expanded into an array of points. In Section 5, we show how we can implement a work-efficient data-parallel implementation of Eratosthenes' sieve, and in Section 6, we show how the approach can be used to implement sparse matrix-vector multiplication. In Section 7, we show how the technique can be combined with some of Futhark's more elaborate reduction constructs for controlling the depth of objects when drawing graphics. In Section 8, we describe related work, and in Section 9, we describe future work and conclude. All code for the presented examples is available at https://github.com/diku-dk/futhark-array19.

2 A Toolbox of Segmented Operations

Futhark features a number of low-level data-parallel constructs, including map, reduce, scan, and filter. Futhark is a sequentialising compiler in the sense that, when parallel constructs turn up inside already-parallel constructs, the compiler is permitted to sequentialise them if it judges that this will result in the most efficient code. Futhark implements a number of fusion and flattening transformations that seek to obtain as good a performance as possible while maintaining the semantics of the program. Futhark also features good abstraction mechanisms, including higher-order functions [21], polymorphism, record types, and higher-order modules [14], all of which are features that are present in the source language but eliminated at compile time with the aim of obtaining efficient target code.

2.1 Segmented Scan

A key operation needed for working with irregular problems is a segmented scan operation. Whereas specialised segmented scan implementations exist, in Futhark, a segmented scan operation can be defined using the ordinary scan function.
Following Blelloch [5, Section 13.2], a generic segmented scan operation can be implemented as follows:

```futhark
let segm_scan [n] 't (op: t -> t -> t) (ne: t)
                     (flags: [n]bool) (as: [n]t) : [n]t =
  zip flags as
  |> scan (\(x_flag,x) (y_flag,y) ->
             (x_flag || y_flag,
              if y_flag then y else x `op` y))
          (false, ne)
  |> unzip |> (.2)
```

The first thing to notice about the Futhark implementation of segm_scan is that it is parametric in the type of elements and that Futhark also allows for specifying, using a so-called size-parameter, that the two array arguments have the same size (i.e., n) and that the resulting array also has size n. This simple support for certain kinds of dependent typing helps the compiler eliminate a number of dynamic checks while at the same time allowing the programmer to specify simple contractual properties of functions. As we shall see later, size parameters may also be referenced as ordinary integer variables in the body of a function, which often makes it straightforward to refer to the size of an argument array. Given a binary associative operator op with neutral element ne, the function computes the inclusive prefix scan of the segments of as specified by the flags array, where true starts a new segment and false continues the current segment. It is a straightforward exercise to prove that, given op is an associative operator with neutral element ne, the function argument passed to scan is an associative operator and (false, ne) is its neutral element. Futhark implements arrays of records (or tuples) as records (or tuples) of arrays, which means that source-language zip and unzip operations are compiled into the identity function, which has zero overhead.
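The operator construction can be exercised with a small sequential model. The following Python sketch (ours, for illustration) implements the same inclusive segmented scan, where a true flag starts a new segment:

```python
def segm_scan(op, ne, flags, vals):
    # Inclusive scan that restarts whenever the flag is True.
    out, acc = [], ne
    for flag, x in zip(flags, vals):
        acc = x if flag else op(acc, x)
        out.append(acc)
    return out

# A sum-scan over three segments: [1,2] [3,4] [5]
print(segm_scan(lambda a, b: a + b, 0,
                [False, False, True, False, True],
                [1, 2, 3, 4, 5]))
# → [1, 3, 3, 7, 5]
```

The sketch works for any associative operator with a neutral element, mirroring the requirements stated for the Futhark version.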
The higher-order infix "pipe operator" \( |> \) passes the result of its left-hand side into the function to the right. ### 2.2 Replicated Iota Based on the \( \text{segm_scan} \) function, we will now present an important utility function called \( \text{repl_iota} \). Given an array of natural numbers, specifying repetitions, the function returns an array of weakly increasing indices (starting from 0) and with each index repeated according to the entry in the repetition array. As an example, \( \text{repl_iota} \left[ 2, 3, 1, 1 \right] \) returns the array \( \left[ 0, 0, 1, 1, 2, 3 \right] \). The function is defined in terms of other parallel operations, including \( \text{scan} \), \( \text{map} \), \( \text{iota} \), \( \text{scatter} \), \( \text{reduce} \), \( \text{replicate} \), and, as mentioned, \( \text{segm_scan} \). Futhark’s \( \text{segm_scan} \) function is specified as follows: ``` val scatter t : *][t \rightarrow ]i32 \rightarrow ']t \rightarrow *][t ``` The first array argument is modified in place with an association list of updates specified by the following two arrays. The modified array is then transferred back to the caller of \( \text{scatter} \). Notice that the uniqueness typing (the \( * \)'s in the type), functions here as a simple ownership transfer mechanism. Here is the definition of \( \text{repl_iota} \): ``` let repl_iota [n] (reps:[n]i32) : [n]i32 = let s1 = scan (+) 0 reps let s2 = map (\i \rightarrow if \ i==0 then 0 else unsafe s1[i-1]) (iota n) let tmp = scatter (replicate (reduce (+) 0 reps) 0) s2 (iota n) let flags = map (>0) tmp in segm_scan (+) 0 flags tmp ``` Whereas the binding of \( s1 \) results in an inclusive scan of the repetition values, the binding of \( s2 \) results in an exclusive scan. 
Using the tmp array, which will be of size equal to the resulting array, the flags array will contain true values in positions where the indexes should be increased (and false elsewhere). The final segmented scan operation will return the desired result. Notice that in order to use this Futhark code with futhark opencl, we need to prefix the array indexing s1[i-1] with the unsafe keyword; the reason is that Futhark is not sufficiently smart to convince itself that this array indexing is always within bounds. An example evaluation of a call to the function repl_iota is provided in the following table:

```
reps       [ 2 3 1 1 ]
s1         [ 2 5 6 7 ]
s2         [ 0 2 5 6 ]
replicate  [ 0 0 0 0 0 0 0 ]
tmp        [ 0 0 1 0 0 2 3 ]
flags      [ 0 0 1 0 0 1 1 ]
segm_scan  [ 0 0 1 1 1 2 3 ]
```

The resulting array is shown in the last line.

### 2.3 Segmented Iota

Another useful utility function is the function segm_iota, which, when given an array of flags (i.e., booleans), returns an array of catenated index sequences, each of which is reset according to the booleans in the array of flags. As an example, the expression

```futhark
segm_iota [false,false,false,true,false,false]
```

returns the array [0,1,2,0,1,2].
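The evaluation steps in the table can be replayed with a sequential Python model (ours, for illustration) of repl_iota that mirrors the same scan and scatter steps:

```python
import itertools

def segm_scan(op, ne, flags, vals):
    # Inclusive scan that restarts whenever the flag is True.
    out, acc = [], ne
    for flag, x in zip(flags, vals):
        acc = x if flag else op(acc, x)
        out.append(acc)
    return out

def repl_iota(reps):
    # s1/s2: inclusive and exclusive sum-scans of the repetitions.
    s1 = list(itertools.accumulate(reps))
    s2 = [0] + s1[:-1] if reps else []
    total = s1[-1] if s1 else 0
    # scatter (replicate total 0) s2 (iota n): write i into tmp[s2[i]].
    tmp = [0] * total
    for i, pos in enumerate(s2):
        if pos < total:          # scatter ignores out-of-bounds writes
            tmp[pos] = i
    flags = [x > 0 for x in tmp]
    return segm_scan(lambda a, b: a + b, 0, flags, tmp)

print(repl_iota([2, 3, 1, 1]))
# → [0, 0, 1, 1, 1, 2, 3]
```

The intermediate values of s1, s2, tmp, and flags in this sketch agree with the rows of the table above.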
The segm_iota function can be implemented with the use of a simple call to segm_scan followed by a call to map:

```futhark
let segm_iota [n] (flags:[n]bool) : [n]i32 =
  segm_scan (+) 0 flags (replicate n 1)
  |> map (\x -> x-1)
```

The map function call is necessary because segm_scan implements a segmented inclusive scan (contrary to a segmented exclusive scan, for which each segment is initiated with an occurrence of the neutral element). Notice that the size-parameter n helps specify that the result array is of the same size as the given flags array.

### 3 The Expand Function

Using the utility functions defined in the previous section, we can now define the expand function. The function is listed in Figure 3.

```futhark
let expand 'a 'b (sz: a -> i32) (get: a -> i32 -> b)
                 (arr: []a) : []b =
  let szs   = map sz arr
  let idxs  = repl_iota szs
  let iotas = segm_iota (map2 (!=) idxs (rotate (-1) idxs))
  in map2 (\i j -> get (unsafe arr[i]) j) idxs iotas
```

Figure 3. The definition of the expand function.

The function makes use of the two utility functions repl_iota and segm_iota, which were both presented in Section 2. Assuming that sz and get are constant-time functions, the dominating function calls of expand are the segmented scan operations appearing inside repl_iota and segm_iota. These calls operate on data of size M = Σ_{x in arr} sz x, where arr is the array argument passed to expand. Under the assumption that sz and get are constant-time functions, the work and span complexity of expand is therefore O(M) and O(log M), respectively.
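Putting the pieces together, here is a sequential Python model (ours, for illustration) of the flat expand from Figure 3, with repl_iota and segm_iota modelled as list operations; it agrees with the nested flatten-of-map specification:

```python
def segm_scan(op, ne, flags, vals):
    out, acc = [], ne
    for flag, x in zip(flags, vals):
        acc = x if flag else op(acc, x)
        out.append(acc)
    return out

def repl_iota(reps):
    out = []
    for i, r in enumerate(reps):
        out.extend([i] * r)
    return out

def segm_iota(flags):
    # segm_scan (+) 0 flags (replicate n 1) |> map (\x -> x-1)
    ones = [1] * len(flags)
    return [x - 1 for x in segm_scan(lambda a, b: a + b, 0, flags, ones)]

def expand(sz, get, arr):
    szs = [sz(x) for x in arr]
    idxs = repl_iota(szs)
    rotated = idxs[-1:] + idxs[:-1]           # rotate (-1) idxs
    flags = [a != b for a, b in zip(idxs, rotated)]
    iotas = segm_iota(flags)
    return [get(arr[i], j) for i, j in zip(idxs, iotas)]

print(expand(lambda x: x, lambda x, i: x * i, [2, 3, 1]))
# → [0, 2, 0, 3, 6, 0]
```

Note that the flags compare each index with its predecessor, so a segment boundary is introduced exactly where the source index changes, which is what resets the per-segment counters produced by segm_iota.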
The expand function can be defined in any parallel language that provides a suitable small set of primitives, namely map, segmented prefix sum (or simply a scan that supports an arbitrary operator, as in Futhark), scatter, and gather. Notice that support for nested parallelism is not required.

3.1 Algebraic Properties and Fusion

The introduced expand function features a number of algebraic properties. We have already presented the semantics of expand in terms of iota, map, and flatten (and an alternative specification in terms of flatMap, which is also sometimes called concatMap). Another simple algebraic property, which can be used to convert a program into using expand (if that is desired), is the following:

```futhark
expand (const 1) f == map (\x -> f x 0)
```

Regular uses of expand can be converted into a regular map-nest using the following algebraic property:

```futhark
expand (const n) f == flatten <-< map (\x -> map (f x) (iota n))
```

Here the infix Futhark function <-< denotes function composition. Futhark does not currently recognise such patterns, as expand is a user-defined function. Futhark will, however, happily inline the const function inside the expand function at every use of expand. This inlining could potentially give rise to optimisations, which, however, are not currently exploited. A proper fusion scheme is essential for any language that targets GPUs [10, 16, 25, 29]. Futhark implements a number of fusion strategies but is also careful not to introduce duplication of work [18].
The expand function fuses with map and filter as follows:

```futhark
map f <-< expand sz get
  == expand sz (\x -> f <-< get x)

expand sz get <-< filter p
  == expand (\x -> if p x then sz x else 0) get
```

Because Futhark supports map-map fusion well [18] and because applications of expand are inlined by Futhark, essentially, expand fuses with map. Fusing expand with filter, however, is not easily supported unless the Futhark fusion engine gets to learn about the intrinsic fusion properties of expand.

4 Nested Irregular Parallelism

In this section, we demonstrate that the flattening-by-expansion technique can also be applied in a nested setting with flattening happening at multiple levels.

4.1 Drawing Triangles

An example of an algorithm worthy of flattening is triangle rasterisation, that is, an algorithm that in parallel computes the points that constitute a set of triangles. The algorithm that we present here is based on the assumption that we already have a function for drawing multiple horizontal lines in parallel. Luckily, we have already seen how we can define such a function! The algorithm for drawing triangles is based on the property that any triangle can be split into an upper triangle with a horizontal baseline and a lower triangle with a horizontal ceiling. Just as the algorithm for drawing lines makes use of the expand function defined earlier, so will the flattened algorithm for drawing triangles. A triangle is defined by the three points representing the corners of the triangle:

```futhark
type triangle = (point, point, point)
```

We shall make the assumption that the three points that define the triangle have already been sorted according to the y-axis.
Thus, we can assume that the first point is the top point, the third point is the lowest point, and the second point is the middle point (according to the y-axis). The first function we need to pass to the expand function is a function that determines the number of horizontal lines in the triangle:

```futhark
let lines_in_triangle ((p,_,r):triangle) : i32 =
  r.2 - p.2 + 1
```

The second function we need to pass to the expand function is somewhat more involved. We first define a function dxdy, which computes the inverse slope of a line between two points:⁴

```futhark
let dxdy (a:point) (b:point) : f32 =
  let dx = b.1 - a.1
  let dy = b.2 - a.2
  in if dy == 0 then 0
     else r32 dx / r32 dy
```

We can now define the function that, given a triangle and the horizontal line number in the triangle (counted from the top), returns the corresponding line:

```futhark
let get_line_in_triangle ((p,q,r):triangle) (i:i32) =
  let y = p.2 + i
  in if i <= q.2 - p.2 then  -- upper half
       let sl1 = dxdy p q
       let sl2 = dxdy p r
       let x1 = p.1 + t32 (f32.round (sl1 * r32 i))
       let x2 = p.1 + t32 (f32.round (sl2 * r32 i))
       in ((x1,y),(x2,y))
     else                    -- lower half
       let sl1 = dxdy r p
       let sl2 = dxdy r q
       let dy = (r.2 - p.2) - i
       let x1 = r.1 - t32 (f32.round (sl1 * r32 dy))
       let x2 = r.1 - t32 (f32.round (sl2 * r32 dy))
       in ((x1,y),(x2,y))
```

The function distinguishes between whether the line to compute points for resides in the upper or the lower subtriangle.
Finally, we can define a parallel, work-efficient function that converts a number of triangles into lines:

```futhark
let lines_of_triangles (xs:[]triangle) : []line =
  expand lines_in_triangle get_line_in_triangle
         (map normalise xs)
```

In the above function, the function normalise sorts (using an unrolled bubble sort) the corner points in each triangle according to the y-axis:

```futhark
let normalise ((p,q,r): triangle) : triangle =
  let bubble (a:point) (b:point) =
    if b.2 <= a.2 then (b,a) else (a,b)
  let (p,q) = bubble p q
  let (q,r) = bubble q r
  let (p,q) = bubble p q
  in (p,q,r)
```

⁴For converting integers to floats, we make use of the function r32 : i32 -> f32; conversely, the function t32 : f32 -> i32 converts floats to integers.

Figure 4. A grid of points generated by first, in parallel, generating the lines that make up a number of triangles, and then, also in parallel, generating the points that make up the lines. The entire algorithm is work-efficient due to flattening and the use of the expand function.

Figure 4 shows the code in action when the function lines_of_triangles is called with an array of three triangles, defined as follows:

```futhark
[((5,10), (2,28) , (18,20)),
 ((42,6), (58,10), (25,22)),
 ((8,3) , (15,15), (35,7))]
```

The lines generated by the function lines_of_triangles are further processed using the points_of_lines function, which generates the points that are then shown in a grid of height 30 and width 62. The technique demonstrated for triangles can easily be adapted to work for solid circles and ellipses. The technique can also be adapted to work for drawing the circumference of regular polygons and circles.
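To see the two expand arguments for triangles in action, here is a sequential Python transliteration (ours, for illustration; tuples stand in for Futhark pairs and Python's round stands in for f32.round), applied to triangles whose corners are already sorted by y:

```python
def expand(sz, get, arr):
    # Nested (sequential) specification of expand.
    return [get(x, i) for x in arr for i in range(sz(x))]

def dxdy(a, b):
    # Inverse slope between points a and b; 0 when the line is horizontal.
    dx, dy = b[0] - a[0], b[1] - a[1]
    return 0.0 if dy == 0 else dx / dy

def lines_in_triangle(t):
    p, _, r = t
    return r[1] - p[1] + 1

def get_line_in_triangle(t, i):
    p, q, r = t
    y = p[1] + i
    if i <= q[1] - p[1]:                      # upper half
        sl1, sl2 = dxdy(p, q), dxdy(p, r)
        x1 = p[0] + round(sl1 * i)
        x2 = p[0] + round(sl2 * i)
    else:                                     # lower half
        sl1, sl2 = dxdy(r, p), dxdy(r, q)
        d = (r[1] - p[1]) - i
        x1 = r[0] - round(sl1 * d)
        x2 = r[0] - round(sl2 * d)
    return ((x1, y), (x2, y))

def lines_of_triangles(ts):
    # Assumes each triangle's corners are already sorted by y.
    return expand(lines_in_triangle, get_line_in_triangle, ts)

print(lines_of_triangles([((0, 0), (0, 2), (2, 2))]))
# → [((0, 0), (0, 0)), ((0, 1), (1, 1)), ((0, 2), (2, 2))]
```

For the right triangle above, the three horizontal lines span from the vertical left edge to the diagonal hypotenuse, one line per y-coordinate.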
5 Flattening the Sieve of Eratosthenes

A sometimes useful strategy for obtaining a parallel algorithm is to use the concept of contraction, the general algorithmic trick of solving a particular problem by first making a contraction step, which simplifies the problem size, and then repeating the contraction algorithm until a final result is reached [27]. A variant of a contraction algorithm is an algorithm that first solves a smaller problem, recursively, and then uses this result to provide a solution to the larger problem. One such algorithm is a version of the Sieve of Eratosthenes that, to find the primes smaller than some n, first calculates the primes smaller than √n. It then uses this intermediate result for sieving away the integers in the range √n up to n that are multiples of the primes smaller than √n. Unfortunately, Futhark does not presently support recursion; thus, one needs to use a loop construct instead to implement the sieve. A Futhark program calculating an array containing the set of primes below some number n is shown in Figure 5. In practice, the non-flattened version takes 171ms on average, whereas the flattened version, which uses the expand function, takes 11.3ms on average. We emphasise here that we have arranged that the versions are comparable in the sense that they all compute the sieves from scratch. For these and later measurements, we do not measure the time taken to move input data to the GPU, or results back to the CPU. The operations in this paper tend to be building blocks in larger Futhark programs, not full applications, and it is our experience that, in practice, their data is already located on the GPU and their results also need further processing on the GPU.

### 6 Sparse Matrix-Vector Multiplication

Numerous possible representations of sparse matrices exist.
Here we demonstrate the use of the flattening-by-expansion technique for implementing a version of sparse matrix-vector multiplication based on a compressed sparse row representation of sparse matrices. In Futhark, the type of a compressed sparse row representation of a matrix can be defined as follows:

```futhark
type csr 't = (row_off: []i32, col_idx: []i32, vals: []t)
```

The type csr is parameterised over the type of the underlying matrix values. Given a sparse matrix of size N × M with NNZ non-zero values, the size of the row_off array is N + 1 and the size of each of the col_idx and vals arrays is NNZ. The compressed sparse row representation favours that each row can be processed in parallel. However, because each row contains a different number of non-zero elements, the problem becomes irregular. We shall apply an extended version of the expand function, called expand_reduce, which additionally reduces over each expanded segment.

```futhark
let primes (n:i32) : []i32 =
  (.1) <| loop (acc,c) = ([],2) while c < n+1 do
    let c2 = if c < t32(f32.sqrt(r32(n+1)))
             then c*c else n+1
    let is = map (+c) (iota(c2-c))
    let fs = map (\i ->
                    let xs = map (\p -> if i%p==0 then 1 else 0)
                                 acc
                    in reduce (+) 0 xs)
                 is
    -- apply the sieve
    let new = filter (\i -> unsafe fs[i-c] == 0) is
    in (acc ++ new, c2)
```

Figure 6. Flattened version of the Sieve of Eratosthenes using flattening-by-expansion.

The last entry of the table shows numbers for a dense matrix-vector multiplication, and we see that when the density gets higher than approximately 10 percent, dense matrix-vector multiplication outperforms the sparse version. This is because of lower constant factors; for the dense matrix-vector multiplication, the Futhark compiler generates a transposition to ensure coalesced memory access, followed by a call to a single GPU kernel that performs the actual computation. In contrast, the sparse operation requires several expensive scans.
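For reference, the row-wise computation that the expansion performs can be modelled sequentially in Python (ours, for illustration): each row i multiplies its stored values against the vector entries selected by col_idx, corresponding to the segmented sum that the flattened version computes in parallel.

```python
def smvm(row_off, col_idx, vals, v):
    # CSR sparse matrix-vector multiplication: for row i, the nonzeros
    # live at positions row_off[i] .. row_off[i+1]-1 of col_idx/vals.
    return [sum(vals[k] * v[col_idx[k]]
                for k in range(row_off[i], row_off[i + 1]))
            for i in range(len(row_off) - 1)]

# The 3x3 matrix [[1,0,2],[0,0,0],[0,3,0]] times the vector [1,2,3]
print(smvm([0, 2, 2, 3], [0, 2, 1], [1, 2, 3], [1, 2, 3]))
# → [7, 0, 6]
```

An empty row (equal consecutive row offsets) simply contributes a zero, which is the irregular case that makes a naive per-row parallelisation load-imbalanced.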
6.1 Sparse-Matrix-Matrix Multiplication

It turns out that it is straightforward to implement sparse matrix-matrix multiplication on top of the functionality already developed. Here is a function that implements multiplication of a sparse matrix with a dense matrix:

```futhark
let smmm [n] (sm: csr f32) (m: [][n]f32) : [n][]f32 =
  map (smvm sm) (transpose m)
```

It is more difficult to implement matrix multiplication between two sparse matrices, for which efficient implementations require some degree of binary searching.

7 Managing Graphics Depth

The triangle drawing technique presented in Section 4.1 works only when all triangles have the same color. In essence, the lines, and eventually the points, generated with `lines_of_triangles` and `points_of_lines` can be used in concert with `scatter` to draw the triangles on a canvas. However, when two triangles overlap, `scatter` does not provide any guarantees about the effect of writing multiple values to the same entry, except when the values are identical. To deal with this problem, Futhark features a function called `reduce_by_index`, which can be used instead of `scatter` to control the effect of multiple writes to the same entry. Here is the type of the function:

```futhark
val reduce_by_index 'a : *[]a -> (a -> a -> a) -> a
                      -> []i32 -> []a -> *[]a
```

In addition to the array arguments, also taken by `scatter`, `reduce_by_index` takes an (assumed to be) associative and commutative function, operating on array elements, and its neutral element. This function is used to combine the old value in the array with the new one that is being written. We shall not discuss the implementation of `reduce_by_index` here but just mention that it is based on the techniques used for implementing histogram computations on GPUs [23, 26].
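The semantics just described can be emulated with a sequential Python sketch of our own. Two details are assumptions of this sketch rather than facts stated above: we treat out-of-bounds indices as ignored (as with `scatter`), and the neutral element is carried only for signature parity, since the sequential emulation does not need it.

```python
def reduce_by_index(dest, op, ne, idxs, vals):
    # Combine every value written to the same destination index with
    # the operator op (assumed associative and commutative, with
    # neutral element ne).  ne is unused sequentially; the parallel
    # histogram-based implementation needs it.
    out = list(dest)
    for i, v in zip(idxs, vals):
        if 0 <= i < len(out):  # assumed: out-of-bounds writes ignored
            out[i] = op(out[i], v)
    return out
```

With `op = (+)` and neutral element 0, this computes an ordinary histogram, which is why `reduce_by_index` can be viewed as a generalised histogram function.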
In fact, `reduce_by_index` can be viewed as a generalised function for computing histograms. Using a 3d representation of colored points, lines, and triangles, we can now make use of `reduce_by_index` to control which parts of triangles are shown. We do this by pairing each pixel to be written with its distance from the camera, and providing an operator to `reduce_by_index` that picks the pixel closest to the camera. This technique is exactly the classic technique of z-buffering. Figure 8 shows a scene from a game implemented in Futhark, where the landscape is drawn using a set of triangles, colored differently based on the vertical position in the terrain. Using the AMD Radeon Pro 460 GPU on a MacBook Pro, 500,000 triangles can be drawn using flattening-by-expansion with a frame rate of 15 frames per second (on a 2880 × 1800 display).
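A minimal sequential sketch of this z-buffering scheme, with our own names and a (depth, color) pixel representation: the combining operator keeps the fragment closest to the camera, and `draw` stands in for `reduce_by_index` applied to a canvas.

```python
INF = float("inf")

def closest(a, b):
    # Operator passed to reduce_by_index: of two fragments written to
    # the same pixel, keep the one with the smallest camera distance.
    return a if a[0] <= b[0] else b

def draw(canvas, positions, fragments):
    # Sequential stand-in for reduce_by_index with the operator above.
    out = list(canvas)
    for i, frag in zip(positions, fragments):
        out[i] = closest(out[i], frag)
    return out

# An empty canvas holds "infinitely far away" background pixels.
canvas = [(INF, "background")] * 4
```

Since `closest` is associative and commutative, the order in which overlapping fragments arrive does not affect the final image, which is what makes the operator safe to hand to `reduce_by_index`.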
These numbers include the time for computing the triangles, based on the camera’s point-of-view and the terrain information, and for copying the computed images back and forth between the GPU and CPU (Futhark cannot presently store images directly in image buffers). Notice that this implementation is of course much less efficient than if we used the specialised graphics hardware on the GPU. We do not claim competitive rendering performance; merely that `expand` allows us to express the algorithm in a natural and parallel way, and still obtain decent performance.

8 Related Work

Much related work has been carried out in the area of supporting nested parallelism, including the seminal work on flattening of nested parallelism in NESL [5, 6], which was extended to operate on a richer set of values in Data-parallel Haskell [9], and the work on data-only flattening [34]. These approaches tend to focus on maximising expressed parallelism, and obviate the need for a function such as `expand`. However, compiler-based flattening has proven challenging to implement efficiently in practice, particularly on GPUs [3]. Other promising attempts at compiling NESL to GPUs include Nessie [28], which is still under development, and CuNesl [34], which aims at mapping different levels of nested parallelism to different levels of parallelism on the GPU, but lacks critical optimisations such as fusion. More recent data-parallel languages include Obsidian [12, 30, 31] and Accelerate [10], which are both embedded in Haskell, and do not feature arbitrary nested parallelism. Accelerate in particular can easily support manually flattened programming in the `expand` style, as segmented scans and scatter operations are readily available. Accelerate also supports certain forms of irregular arrays through a notion of irregular stream scheduling [13].
Other attempts at supporting nested (and even irregular) parallelism on GPUs include more dynamic approaches, such as dynamic thread block launching [33] and dynamic parallelism, which are extensions to the GPU execution model involving runtime and micro-architecture changes. These approaches to supporting irregular parallelism do, however, often come with a significant overhead [32]. Other dynamic approaches include a partial flattening approach, implemented using thread stealing, which also introduces a significant overhead [22].

9 Conclusions and Future Work

In this paper, we have demonstrated a programming technique that allows for convenient manual flattening of certain irregular nested parallel constructs, even if the target language does not support nested parallelism at all. The resulting code is asymptotically as efficient as that which would have been generated with full NESL-style flattening, and allows the programmer more control and a “pay-as-you-go” strategy for flattening. Further, the real-world performance is sufficient to carry out real-time graphics rendering.

There are a number of possibilities for future work. First, some overhead can perhaps be avoided in situations where flattened data is immediately scattered into a target array. To avoid the resulting double copying overhead, one may consider defining a function that, instead of returning a target array, takes as argument a destination array, which is then returned to the caller with modified content. Second, there are a number of irregular nested parallel algorithms that may benefit from the use of the `expand` function. Such algorithms include algorithms for graph traversals [17] and irregular parallel financial applications [1]. Other possible future work includes investigating whether the technique can be extended in such a way that it can be used to ease the flattening of more involved algorithms, such as quick-sort [15] or multiplication of two sparse matrices.
Acknowledgments

This research has been partially supported by an Independent Research Fund Denmark grant for the research project FUTHARK: Functional Technology for High-performance Architectures. Thanks to the reviewers for their careful comments and to Cosmin E. Oancea for many fruitful discussions about this work.

Figure 8. A landscape scene from a game implemented in Futhark. The terrain is displayed using a large number of triangles, which are colored according to their vertical positions in the terrain.
Towards Collaborative Web-Based Impact Assessment

Clemens Heidinger, Universität Karlsruhe (TH), Germany, heidinger@ipd.uka.de
Erik Buchmann, Universität Karlsruhe (TH), Germany, buchmann@ipd.uka.de
Klemens Böhm, Universität Karlsruhe (TH), Germany, boehm@ipd.uka.de

ABSTRACT

Impact assessment (IA) is a key method for the legislator to evaluate policies, norms or regulations currently under development. Experts use IA to gather and analyze input from many individuals to obtain clear problem statements, estimations regarding policies etc., and use this information to compare policy alternatives. Currently, the opinions, expertise etc. gathered for IA need to be structured by hand. Thus, the analysis steps of IA are time-consuming, and IA does not scale with the number of persons involved. In this paper, we introduce a collaborative approach for IA. Based on a Web 2.0 architecture, we let a community of individuals derive the potential, downsides and design alternatives of policies collaboratively. An important characteristic of our approach is that it guides individuals through the process of creating structured input. Our approach is fully implemented, and we have evaluated it together with legal experts using the structured-case method. While our evaluation reveals that the acceptance of web-based IA strongly depends on the user interface, it also acknowledges that our approach can be an important tool for future IA.

Categories and Subject Descriptors

H.4 [Information Systems Applications]: Miscellaneous

General Terms

Design, Legal Aspects

Keywords

Impact Assessment, Web 2.0 Application

1. INTRODUCTION

Impact assessment (IA) [9] is important to evaluate anticipated effects, unintended side effects and possible alternatives of norms, policies, regulations or laws. More precisely, IA is a quality-based approach where independent professionals as well as citizens concerned are asked for expertise and comments.
Experts analyze this input in order to estimate the appropriateness of legal norms, to balance their positive and negative effects on society and to compare alternative policy options [9]. Thus, IA is a key instrument for the legislator to develop elaborate laws. For this reason, IA is widely used in the EU. The Commission of the European Communities (EC) has declared in the “Better Regulations” package that any major policy proposal must be evaluated using IA. In 2008, the EC has commissioned 135 IAs [10] on legal norms regarding health care, consumer rights, nuclear safety, etc. Further, the EC has established an independent IA board to supervise and evaluate IAs performed by other EC departments. In 2007, the board has examined 102 IAs [8]. This number has risen to 135 in 2008 [10], i.e., the EC strives to examine all IAs that have been commissioned. For any IA, the topics behind the policies evaluated are complex. Thus, in order to obtain a full picture of the effects of the policy and of its alternatives, it is of utmost importance to consider the opinions and expertise of many professionals of the policy domain and of many citizens affected by the policies [9]. However, integrating many participants in the IA process is challenging. This is because different kinds of input have to be gathered from people with different knowledge and education in a way that allows for a structured analysis. We have designed and implemented a Web 2.0 approach that lets a community of users participate in the IA process, without restricting the kind of information gathered. Our approach adopts topic maps to (1) collect and browse IA-relevant information, to (2) create references between related pieces of information, and to (3) support IA analyses. Our contributions described in this article are as follows: - We introduce the roles Expert and Participant in the IA context, describe their tasks in the IA process and study their needs regarding a Web 2.0-supported IA tool. 
- We develop a topic map which models the information collected and analyzed during the IA process. It can be extended by further concepts if necessary for a particular IA.
- We describe the design and implementation of a prototypical Web 2.0 application that allows a community of users to participate in IA, and that helps experts to analyze the information obtained.
- We evaluate our approach with the structured-case method. The evaluation confirms that our approach is useful for IA and provides suggestions for future research.

[1] http://ec.europa.eu/governance/better_regulation/
[3] http://ec.europa.eu/governance/impact/iab_en.htm

Paper outline: The next section features an overview of IA. Section 3 introduces our approach. Section 4 describes our evaluation. Section 5 concludes.

2. IMPACT ASSESSMENT

In this section we describe the fundamentals of IA, together with related work on IA. We then compile our requirements regarding system support for the IA process. IA is a qualitative method to gather evidence on the effectiveness and possible impacts of policy proposals. Based on such evidence, IA derives quality measures for policy alternatives, e.g., the expected degree of goal accomplishment, possible side effects, the number of citizens affected, appropriateness, etc. Thus, IA provides the means to systematically

- gather information from a wide range of independent stakeholders,
- provide a comprehensive analysis of impacts on sectors like society, environment or economy,
- identify the policy that meets the objectives with the fewest side effects,
- explain why a policy is necessary and appropriate, or, conversely, show why a policy should not be implemented.

Since 2002, the EC has completed over 400 IAs – 135 in 2008 alone [10]. The area of topics covered is broad [4]. A directive on the safety of toys has been analyzed as well as a regulation on motor-vehicle emissions.
In 2007 three policy initiatives were stopped because IAs concluded that these policies were inappropriate [7]. Countries outside the EU use IA as well. However, as IA is a key method for the legislation process of the EU, the EC has spent the most effort on refining the IA process in recent years, including the establishment of an independent advisory board and mandating external evaluations [11]. For this reason, the definitions of the EC [9] are state of the art in IA and are used as a basis for our work.

2.1 IA Roles and IA Process

From the definition of the EC [9] we have extracted two distinct roles in the IA context: Experts carry out the IA process, i.e., they structure the information gathered and use it to assess different policy options. For example, experts can be politicians who want to make informed decisions. Experts work on IA documents like summary reports, which are compiled from information that is gathered from participants. Participants are professionals of the policy domain or independent stakeholders affected by the policy under investigation. Experts can (but do not have to) be participants.

IA consists of six steps [9]:

1. Identify the problem. Develop a clear problem statement. The basis of the problem statement is an intuitive problem description, e.g., a petition to the parliament.
2. Define the objectives. Specify the objectives needed to solve the problem. Based on this specification, investigate if the objectives are desirable. Methods to obtain the objectives include expert workshops and pilot studies.
3. Develop main policy options. Develop a number of concrete policy options from the preceding analysis of the problems and objectives. Furthermore, identify and eliminate inferior policy options when it is obvious that other options achieve the same or better results with fewer negative side effects.
4. Analyze the impacts. Estimate the impacts and side effects of main policy options regarding (1) sectors of general importance, e.g., impacts on social life, economics, or administrative overhead, and (2) sectors affiliated with the policy domain.
5. Compare the policy options. Compare the information obtained in the previous steps in order to identify policies which achieve the objectives envisioned without inappropriate side effects. To do so, the policy options must be compared with respect to how well they reach the objectives, and positive and negative impacts must be weighed against each other.
6. Policy monitoring and evaluation. The evaluation finds out if the policy proposal provides the intended effects. One evaluation option for the legislator is to pass an experimental law, i.e., a law with a revision clause requiring a re-evaluation after a defined period of time. Only if the law passes the re-evaluation will it be made permanent.

The main criterion for the applicability of a certain policy option is the principle of proportionality [5]. In order to prove or disprove that a policy option complies with this principle, Steps 3 to 5 of the IA process collect and evaluate empirical facts [6].

Example 1: In order to provide an intuitive example, we briefly outline the IA [5] for an EU directive to protect the soil [6]. The means to perform this IA have included stakeholder workshops where 400 participants were organized in five working groups, a public Internet survey, and a public consultation with 1,206 citizens, 377 soil experts and 287 organizations from 25 countries [6].

1. Problem. The soil is threatened by erosion, decline in biodiversity, sealing, pollution, landslides, contaminated water, etc., which do not stop at national borders. However, there is no EU-wide regulation.
2. Objectives. First, determine risks that arise from human activities and natural conditions, and identify areas at risk. Second, set supranational risk-reduction targets and measures to achieve them.
Leave aside soil sealing, as this is already regulated elsewhere.

3. Options. Policy options are: (1) Encourage EU members to protect soil by non-binding guidelines. (2) Specify all objectives and means for each soil threat at EU level. (3) Define mandatory objectives, but leave it to the member states to transfer them to national law. The first two options were rejected in this IA step. The alternatives of the third option include the means to identify areas at risk, measure the risks, develop national protection strategies, establish soil status reports, etc.
4. Analysis. The analysis step identifies the costs, side effects and impacts of each option. To provide one example, the alternatives to find areas at risk are: (a) Use existing monitoring schemes only. (b) Monitor any area in an EU-wide 10x10 km grid. (c) Monitor progress in the identification of risk areas.
5. Comparison. This IA step compares the alternatives for each policy option in isolation. When considering the alternatives to find areas at risk, Option (a) does not incur additional costs, but is less effective. Option (b) results in 97 million EUR over 50 years, while the costs for Option (c) are estimated to be significantly lower while being equally effective. Thus, the third alternative is optimal.
6. Monitoring. The final step of the IA for the soil protection policy has identified indicators for the objectives. Reporting obligations for specific measures are intended to provide an effective evaluation. For example, one obligation is that the progress of risk-area identification is monitored and evaluated.

[6] The German Federal Constitutional Court has emphasized the need for empirical facts several times [2, 3]. The same holds for other EU members.

The result of the IA process is a policy that has been carefully evaluated and refined. As a rule of thumb, the more people are asked for their contributions, the smaller the chance of missing important impact factors.
However, the sheer amount of information provided makes it difficult to involve a large community of users. In order to overcome this limitation, we strive for a software application that helps to gather, structure and analyze IA information.

2.2 Requirements for an IA Application

The requirements for an application that supports IA originate both from the specification of the IA process described so far and from the need to involve many members of society who do not have in-depth knowledge of IA. We have identified six requirements:

R1: Roles “participant” and “expert”. Because experts and participants have different tasks and skills, an important requirement is to provide them with mechanisms that are tailored to their role.
R2: Structured knowledge representation. A structured representation of any information gathered during the IA process is required to organize and analyze the contributions of a large number of participants.
R3: Analyses. In order to support the information needs of experts and participants during the analysis steps, a set of pre-defined queries as well as the means to browse and traverse the knowledge representation are required.
R4: Collective intelligence. The participants might provide contributions that are misleading, contradictory, or biased by personal attitudes. Thus, editing and voting mechanisms are required to let the community sanitize the contributions.
R5: User guidance. As the participants are not familiar with the complex IA process, the software has to provide intuitive and user-friendly interfaces to guide them through the IA process.
R6: Extensibility. Since the IA process depends on the topics under investigation, the software needs to be extensible. Furthermore, experts must be able to adapt the knowledge representation without having to recompile the software.
We have decided to develop a web application based on these requirements for three reasons: (1) The Internet provides the means to address a large community of users, (2) web applications can be used through standard software installed on any PC with Internet access, and (3) the Web 2.0 technologies that are currently available ease the implementation of such an application. 3. A WEB APPLICATION FOR IA This section describes our collaborative Web 2.0 application that assists in the IA process. In order to comply with the requirements identified in the last section, we will address the following building blocks: - A data structure that represents the knowledge collected during the IA process (Requirement R2). - The functionality available to role Participant (Requirement R1). This includes support (a software-“wizard”) to guide participants through the process of collecting relevant information (Requirement R5) and an editing/voting system (Requirement R4). - The functionality available to role Expert (Requirement R1), i.e., the functions for the participants together with the means to search, browse, structure and evaluate the information gathered, and to extend the knowledge representation (Requirement R6). - A set of pre-defined queries that help to execute general IA analysis steps (Requirement R3). In order to realize these building blocks, we have analyzed IAs of the past, documentation of IA and court decisions, together with a German law firm specialized on environmental law. We cannot guarantee that we have not overlooked aspects important for future IA. However, experts can adapt our representation, and extending a web application is relatively simple (cf. Requirement R6). Furthermore, it is part of our evaluation to identify functionality that might be missing. Regarding architecture and implementation, our prototype follows a standard client-server architecture with a database backend and is using AJAX for a responsive user interface [14]. 
3.1 Knowledge Representation The six analysis steps for IA imply that it is good practice if a participant, when proposing a new measure, provides the following information: (1) describe the measure, (2) link the measure with the objective it is intended for, (3) state how measure and objective comply with the principle of proportionality, and (4) point to experiences like studies, facts or anecdotic evidence. Thus, a knowledge representation for IA must systematically manage policies, objectives, measures, problems, impacts, juridical concerns, etc. (R2). Furthermore, the representation has to model relationships between those aspects, e.g., which measures are planned to reach a certain objective, or which objectives have been specified for a particular policy option. Finally, the representation must allow structured access to this information (R3). It must be possible to issue queries to find out how policy aspects are linked to certain impacts, and how a policy option affects another one. We have realized our knowledge representation on the basis of topic maps. Topic maps [15] are an ISO standard [12] for ontologies that humans can easily understand, browse and navigate. The building blocks of topic maps are topics, associations and occurrences. All three building blocks can be typed and have instances. Topics describe an object or aspect that exists in the real world. If a topic is typed, it is called topic type and describes common characteristics of an object set. An instance of a topic type is called instance topic and refers to a certain object. Associations establish links between topics. Association types describe one certain relationship between topic types. Occurrences link to external resources outside of the topic map. We map IA information to a topic map as follows: Topic types and association types provide a structured classification for statements and contributions. These types are domain-independent, i.e., they are the basis for any IA. 
When participants provide domain-specific information for a particular IA, they create instances of topic types, association types and occurrence types. Figure 1 visualizes our model. The model has been obtained from the consistent IA argumentation structure of the EC [9], which implies the model components. Rectangles represent the topic types, and association types are boxes with round corners. Edges connect the topic types linked by a particular association type. As each topic type and association type can have instances and occurrences, they are not shown in the figure for better visibility. If future IAs required further types, experts can extend or modify the model (Requirement R6, cf. Subsection 3.3). Our model includes the topic types Norm, Level, Objective, Measure, Impact and Experience. A Norm represents a policy proposal, law, regulation etc. The Level describes at which administrative layer a norm will be implemented (European, national, communal, etc.). Objectives represent the goals of a norm. A Measure is any action that might help to realize these objectives. The Impact stands for intended and unintended effects of a measure. Finally, Experiences provide evidence that a certain measure helps to reach a particular objective. Bi- and trivalent association types connect the topic types specified. The association type “disproves” links an experience, a measure and an objective such that, according to the experience, the measure does not help to reach the objective. Association type “proves” in turn stands for a positive proof. Two experiences can be contradicting (“contradicts”). The principle of proportionality requires that a measure “facilitates”, “is suitable for”, “is necessary for”, and “is adequate for” an objective. The objective “is specified in” a norm. A measure often “has” an impact, and it is possible that it “is encountered with” another measure. Sometimes an impact “leads to” another impact. A measure “is specified in” a norm. 
Two norms may be connected or contradicting (“is connected to”, “is in conflict with”). Finally, a norm “is realized on” an administrative level. **Example 2:** To provide the intuition behind our approach, we exemplarily show how a detail of Example 1 is represented in our context. The policy proposal for soil protection [6] specifies an obligation to create a status report of the soil when selling a site. With our representation, this corresponds to an instance of “Measure”: <table> <thead> <tr> <th>Instance name</th> <th>Description</th> <th>Topic type</th> </tr> </thead> <tbody> <tr> <td>Status report</td> <td>Either the seller or the buyer of a potentially contaminated site provides the transaction partner with a soil status report.</td> <td>Measure</td> </tr> </tbody> </table> **Figure 2:** An instance of “Measure”. The goals of this measure are represented as instance topics of type “Objective”. One of the objectives mentioned is to satisfy public interest, which is mapped to our knowledge representation as follows: <table> <thead> <tr> <th>Instance name</th> <th>Description</th> <th>Topic type</th> </tr> </thead> <tbody> <tr> <td>Public interest</td> <td>The obligation to transmit information on the soil status improves and accelerates the identification of contaminated sites.</td> <td>Objective</td> </tr> </tbody> </table> **Figure 3:** An instance of “Objective”. The topic instances “Status report” and “Public interest” are linked with an instance of the association type “facilitates”. All other aspects mentioned in the proposal for soil protection can be mapped to our representation in a similar way. ### 3.2 Support for Participants Now we specify the functionality needed by the role participant. Participants have to provide new arguments in favor of or against certain policy options and shall be able to link existing arguments to others. 
When looking at our knowledge representation, this corresponds to creating new topic instances, providing relationships (association instances) between them and linking external documents as instances of occurrence types. Furthermore, participants must be able to browse the topic map in order to review, extend and correct existing instances. Thus, our web application has to provide the following set of methods for participants: - Browse the topic map. - Search for topics and associations. - Create and edit instances of topic types, association types and occurrence types. According to Requirement R5, the participants cannot be expected to know the IA process in detail, and our system has to guide them through the creation of meaningful IA-relevant contributions. The standard approach to guide users through a web application is to provide a “wizard”, i.e., a sequence of web pages that collect the required information in consecutive steps. Since creating new instances of topic type “Measure” is one of the most frequently used functions, our Web 2.0 application implements a wizard that asks the participants for all information required for this purpose. This includes statements on how well the new measure complies with the principle of proportionality, and which objective the measure is intended for. We have identified the following consecutive processing steps the participants must complete to create meaningful instances of topic types and relationships between them regarding new measures: 1. Specify a new measure. 2. Link the measure to an existing objective. Alternatively, create a new objective and create a link to the measure. 3. Provide a statement on how the measure complies with the principle of proportionality, i.e., if (a) the measure is suitable to achieve the objective, (b) there is a milder measure available to achieve the same objective, (c) the measure is in proportion to the importance of the objective.
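The wizard steps above translate directly into operations on the topic map. The following minimal Python sketch is hypothetical — the class and method names are ours, not the actual implementation — and shows how the instances from Figures 2 and 3 could be created and linked:

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    description: str
    topic_type: str  # e.g. "Measure", "Objective", "Experience"

@dataclass
class Association:
    assoc_type: str  # e.g. "facilitates", "proves"
    members: tuple   # names of the topics participating in the association

@dataclass
class TopicMap:
    topics: dict = field(default_factory=dict)
    associations: list = field(default_factory=list)

    def add_topic(self, name, description, topic_type):
        self.topics[name] = Topic(name, description, topic_type)

    def link(self, assoc_type, *member_names):
        self.associations.append(Association(assoc_type, member_names))

tm = TopicMap()

# Step 1: specify a new measure (the instance from Figure 2)
tm.add_topic("Status report",
             "Either the seller or the buyer of a potentially contaminated "
             "site provides the transaction partner with a soil status report.",
             "Measure")

# Step 2: link the measure to an (existing or new) objective (Figure 3)
tm.add_topic("Public interest",
             "The obligation to transmit information on the soil status "
             "improves and accelerates the identification of contaminated sites.",
             "Objective")
tm.link("facilitates", "Status report", "Public interest")

# Step 3: statements on the principle of proportionality,
# modelled here as further association types
tm.link("is suitable for", "Status report", "Public interest")
```

A pre-defined analysis query, such as finding “facilitates” links that no experience “proves”, would then amount to filtering the list of associations.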
Besides the wizard for new measures, other wizards might be useful as well. For example, another frequent action is to specify new objectives. Here, the participants identify promising measures and link experiences to indicate that these measures might help to meet the objective. **Example 3:** Figure 4 shows the first step of our wizard where the participants provide details about a measure. With the information obtained from this page, the wizard creates the instance of the topic type Measure shown in Figure 2. ### 3.2.1 Editing/Voting Participants might provide contributions that are misleading, contradicting, or biased by personal attitudes. This limits the number of participants in current IAs, as experts are needed to filter such contributions. In order to tackle this issue, Requirement R4 calls for mechanisms that allow the community to control the quality of the contributions provided by the participants. To be precise, we require a way to determine if updates to the topic map should be applied or not. For example, if a participant tries to correct the description of an objective but fails to make a significant improvement, this change should not be applied. The system to deal with changes also has to protect contributions from vandalism, e.g., participants deleting information on a contribution they do not agree with. Every valid point of view should co-exist, to let experts see all aspects of a law. When looking at the Web, two approaches are commonly used to control changes: Either a group of designated users makes the decision, i.e., they moderate what is applied and what is not, or the community at large takes a vote. In order to unburden the experts from structuring and examining all information provided, we have opted for voting. A simple voting mechanism is to let the community provide binary votes in favor of or against a certain change, and to make the change persistent only if the majority of users have voted positively within a certain period of time.
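Such a majority-vote mechanism can be sketched in a few lines. The Python below is a hypothetical illustration — the names and the length of the voting window are our own assumptions, not the prototype's actual code:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Edit:
    """Records who wants to change which element, at what time, and how."""
    author: str
    element: str
    change: str
    opened: datetime
    # user -> True (in favor) / False (against)
    votes: dict = field(default_factory=dict)

    def vote(self, user, in_favor):
        self.votes[user] = in_favor

    def is_accepted(self, now, window=timedelta(days=7)):
        """Apply the change only if a majority voted in favor
        within the voting period; None while voting is still open."""
        if now - self.opened < window:
            return None
        in_favor = sum(1 for v in self.votes.values() if v)
        return in_favor > len(self.votes) - in_favor

edit = Edit("alice", "Objective: Public interest",
            "Reword description", datetime(2009, 1, 1))
edit.vote("bob", True)
edit.vote("carol", True)
edit.vote("dave", False)
# edit.is_accepted(datetime(2009, 1, 9)) -> True (2 of 3 in favor)
```

Changes that create new objects bypass this check and are applied immediately, as described below.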
This voting mechanism is applicable in large community-driven databases, e.g., MusicBrainz. We have implemented this voting mechanism for any action that modifies existing information, i.e., 1. Change the name of a topic. 2. Change the description text of a topic. 3. Dissolve a type-instance relationship. 4. Change the name of an association. 5. Change the description text of an association. Changes that generate new data objects, e.g., creating new topic instances of “Measure”, are applied immediately. For each modification that invokes the voting mechanism, our application generates an Edit. The edit describes who wants to change which element at what time and in which way. Figure 5 shows an edit open for vote. Figure 5: A change that is up for a vote. The upper part displays information on the modification, the lower part shows the interface elements that let the users vote. Other voting approaches might be applicable as well. For example, Discogs uses an approach where designated moderators control any changes. For more sophisticated voting mechanisms see [1]. ## 3.3 Support for Experts The task of the IA role expert is to draw conclusions on contributions and statements created by participants, i.e., experts have to search, browse, structure, augment and analyze the information gathered. As the experts might also be specialists of the policy domain, functionality available to any participant is also available to them. Experts also make corrections, e.g., if they find misclassified contributions. Furthermore, experts might have to adapt the knowledge representation to the needs of future IAs (cf. Requirement R6), i.e., they need to modify each aspect of the building blocks Topic, Association and Occurrence. Thus, our web application has to provide the experts with the following set of methods: - Browse the topic map. 
- Search, create, modify and delete - topic types, association types, occurrence types, - topic instances, association instances and occurrence instances. - Generate overviews, i.e., present lists of all topics together with the types and instances associated. - Execute queries that perform recurring IA analysis steps (cf. Section 2). The next subsection will describe the queries needed for analyses.

## 3.4 IA Analyses

As Section 2 has shown, the IA process includes analysis steps, e.g., an expert has to check for facts that indicate or disprove the suitability of a measure regarding a certain objective. Our knowledge representation allows us to implement these analyses as pre-defined queries on the topic map. With our knowledge representation, there is an association linking objective, measure, and facts (e.g., a scientific study). Thus, a query can identify all objective-measure associations which are not linked with a fact. Together with experts from a law firm, we have identified four queries which are repeated on a regular basis in any IA process: - **What are the objectives?** (Step 2 of the six analytical steps described in Subsection 2.1.) - **What are the measures intended to accomplish the objectives?** (Step 3.) - **What are the facts that support a measure-facilitates-objective relationship?** (Steps 3 to 5.) - **What are the measures where no facts indicate that the measure will accomplish its objectives?** (Steps 3 to 5.) Note that these queries do not cover every information need. However, any other information can be obtained by browsing the topic map, and the web application can be extended with further queries (Requirement R6). --- 7 [http://musicbrainz.org/](http://musicbrainz.org/)

## 4. EVALUATION

The objective of our evaluation is to find out the following: (1) Does our collaborative approach provide good support for experts gathering and structuring information from participants?
(2) Is the functionality implemented sufficient for the tasks participants and experts have to carry out in the IA process? Since IA is a qualitative method to systematically collect evidence on complex topics, its outcomes cannot be measured easily in quantitative terms (cf. [16]). Instead, we need a qualitative evaluation. We have used the Structured-Case [4] (SC) method to improve and evaluate our approach. SC is an iterative process where each iteration results in a critical reflection of the issues learned, together with a set of improvements for the next iteration. The iteration stops when the informative value of the results is sufficient. In the following, we briefly sketch the fundamentals of SC, and we provide a description of our first and second SC iteration.

### 4.1 The Structured-Case Method

SC represents the iterative nature of building theory from qualitative data by using a formal process model consisting of conceptual frameworks and a pre-defined research cycle. The conceptual framework represents the researcher's aims, understanding and theoretical foundations. The research cycle consists of four stages which the researcher repeats until the research goals are met: - **Plan**: Develop a course of action (plan) by deriving appropriate use cases, target groups and evaluation methods from the conceptual framework. - **Data Collection**: Gather and record data according to the plan. Do some initial analysis to find out which supplementary information has to be collected. - **Analysis**: Structure and analyze the data collected, guided by the concepts of the conceptual framework, and develop new concepts and themes. - **Reflection**: Critically review the research process and scrutinize the analysis results in order to avoid merely confirming expected findings. Adapt the conceptual framework for the next iteration. We have conducted two SC iterations.
During the first SC iteration, we have developed an operational prototype, together with a concept for a thorough evaluation. The second iteration evaluates our approach and provides information on future work.

### 4.2 The Initial Conceptual Framework

We start the SC process with a conceptual framework that reflects our central idea of enhancing the IA process by Web 2.0-style user collaboration. We can declare success if typical IA users regard our approach positively, the requirements we have identified are meaningful for IA, and our approach meets these requirements. Thus, our conceptual framework builds on the working hypotheses that - **H1**: experts appreciate obtaining a structured set of contributions from a large community, instead of structuring the evidence and facts gathered from a comparably small set of participants by hand, - **H2**: the requirements identified in Section 2 are sufficient for a collaborative IA web application, and - **H3**: the knowledge representation as well as the components of our approach, as described in Section 3, provide the experts and participants with the means to collect and structure IA information. Furthermore, the framework includes literature on the fundamentals of the IA process and case studies carried out with IA (cf. Section 2).

### 4.3 The First Research Cycle: Preparation

Before describing our evaluation in the second research cycle, we briefly summarize our preparations, which were done in the first cycle. This cycle had to identify the means for a thorough evaluation. Furthermore, it had to refine our prototypical implementation for this purpose. Therefore, we have conducted a pilot study with IA experts from a law firm that develops software to manage environmental obligations of large companies. Thus, our experts are familiar both with IA and with software solutions for legal problems. We have based our pilot study on test data from the proposal for an EU directive on soil protection [6].
We summarize the results of the first SC iteration as follows: - Our approach seems to be sound, and our prototype is ready for an evaluation at a larger scale. - The user interface of the wizard needs some changes to ensure that information flows are transparent for the participants. - Jurists with expertise in IA are a relevant target group for our evaluation. - Because it is very well documented, the draft of the German Renewable Energy Sources Act [13] is an appropriate test case for a thorough evaluation.

### 4.4 The Second Research Cycle: Evaluation

After having identified a test case and a target group for the evaluation, and after fine-tuning our prototypical implementation, we have started the second SC iteration to evaluate our approach. **Plan.** The target group of our collaborative IA approach are experts gathering evidence on policies. Thus, we have to evaluate our approach with individuals who are likely to spend some time on training regarding the IA process and the use of IA tools. For this reason, we evaluate our approach with three lawyers who are familiar with our prototype from the last SC iteration. Our method of research is an IA case study, followed by a questionnaire and interviews. According to the results of the first SC iteration, we will use a draft of the German Renewable Energy Sources Act [13] as a test case. This act makes it obligatory to use renewable energy sources when constructing new buildings or carrying out major restorations on old ones. In order to find out if our collaborative IA approach provides good support for experts when gathering and structuring information from participants, our experts have to find out if the information provided is represented properly, and if the underlying knowledge representation allows the execution of an IA analysis based on the various statements and the relationships between them. Therefore, it is part of our plan to let our lawyers act both as experts and as participants.
In particular, we assign each lawyer one of the roles “house owner”, “representative of the association for the protection of tenants” and “environmentalist”, and we let them gather evidence and facts according to their roles. The roles provide controversial opinions on this specific draft. For example, house owners are likely to be unwilling to spend money on renewable energy sources, while environmentalists strongly prefer renewable energy sources over the traditional ones. After collecting information and opinions according to these roles, the lawyers had to take the “expert” role in the subsequent IA processing steps, as described in Section 2. **Data Collection.** We have scheduled one day for an introduction to the test case, two weeks to execute the IA test case with our prototype, and a number of days to do interviews and answer an online questionnaire. Our questionnaire consists of 30 questions targeting the usability of our prototype in the IA process, the appropriateness of our requirements, and the power of our knowledge representation to structure IA-relevant statements. The questionnaire covers the following seven aspects: **General questions:** What is IA? How much experience do you have with IA? Have you participated in IAs so far? **IA test case:** Describe the role you have played in the test case and state the user name used to log into the web application. **Structured contributions:** How do you rate the structured knowledge representation in general? How useful is the set of topic types and association types we have devised? Name advantages, disadvantages and open issues. **Guidance for participants:** How well does the interface for participants – and in particular the wizard – guide through the IA process? Are there important aspects of IA that are not covered by the wizard? **Editing/voting:** Do you think the editing and voting system is effective in ensuring high-quality contributions?
Did you see any flaws in the voting system? **Analysis:** Do the pre-defined queries provide all information needed for the IA analysis? If this is not the case, what is missing? **Usability:** How do you rate the usability of our web application for the roles “expert” and “participants”? Which way of entering contributions do you prefer, and why? In order to obtain meaningful indicators, we let our experts either answer in plain text or provide marks on a five-point Likert scale. Since we were interested in objective results, we paid much attention to ensuring that our questions do not imply a certain answer. In addition, we have conducted extensive interviews with each expert to learn the rationale behind their answers, e.g., why certain features of the application have been regarded as more useful than others. **Analysis.** After having executed the data-collection step, we have analyzed the information obtained. In the following, we summarize the findings compiled from the interviews and the data submitted through the questionnaire. We relate these findings to the working hypotheses from our initial conceptual framework. **H1: Structured set of contributions** Our experts appreciated obtaining structured information from the very beginning. In particular, they found it very useful to find related information by browsing the topic map, instead of having to file large amounts of unstructured data manually. However, our study participants did criticize that the web application does not enforce this structure. For example, some participants wrote all related information into the description field of a measure, instead of creating and linking a topic of type “Impact”. We conclude that our knowledge representation is sound, but more of the functionality needed by participants should be implemented in a wizard style.
**H2: Satisfaction of requirements** We have to verify if our study participants agree with our requirements, i.e., if the implementation of our requirements results in a useful IA application. **R1: Roles “participant” and “expert”** The differentiation between participants and experts has been regarded as very useful. This is important, as the functionality for experts is complex and needs a comprehensive understanding both of the IA process and of the knowledge representation. We cannot expect this from participants. **R2: Knowledge representation** The study participants agree with our structured representation of the information collected and evaluated during the IA process. In particular, they did not want to adapt the knowledge representation, and they did not find any IA-relevant information that could not be mapped to the knowledge representation. **R3: Analyses** The queries pre-defined to ease repetitive IA analyses were deemed useful or very useful on the Likert scale of the questionnaire. On the other hand, some experts mentioned that analyses beyond these pre-defined queries were complicated, because they required going through several steps using the interface for experts. **R4: Collective intelligence** The idea to let the community review changes in order to improve the contributions of the participants was considered to be useful or neutral. Our study participants found it confusing to have some changes accepted or rejected without being able to know who voted in favor or against the modification. **R5: User guidance** The wizard to guide inexperienced users through creating topics and associations in a sequence of steps was considered useful to very useful. This is in line with our findings regarding Requirement R1. **R6: Extensibility** Because our experts did not find missing IA topics (cf. our findings for Requirement R2), they did not use any functions to modify and extend the topic map. 
However, our experts agreed that it is very useful to be able to adapt the knowledge representation if necessary. --- 9 [http://www.ipd.uka.de/~heidingc/IA/Questionnaire/](http://www.ipd.uka.de/~heidingc/IA/Questionnaire/) **H3: Applicability of our approach** Since our evaluation was based on a real test case, we have shown that the information from an entire IA process can be mapped to our knowledge representation. Our study participants were able to make their contributions with ease, and they could perform all IA analyses and processing steps in the role of experts. To sum up, the questionnaire and the interviews have revealed that the user interface needs to be more intuitive. Since experts might need to modify the knowledge representation according to the needs of future IAs, we have to pay more attention to a thorough training for experts. Furthermore, in order to avoid faulty entries made by inexperienced users, it is important to implement more functionality as wizards. In particular, this is true because even the experts found the concept of a topic map difficult to understand. Finally, a refined version of our application should make the voting system more transparent. On the other hand, our approach seems to be sound and should serve as an important contribution towards involving a large community of participants in future IAs. Our prototypical web application is operational, and was deemed helpful by IA experts. **Discussion.** When reviewing the first two research cycles, we find that our method of research has provided us with an operational prototype, a comprehensive model of the knowledge gathered during IA, and strong indications that our approach helps to involve a large community of users in the IA process. The next research cycle must involve tests at a larger scale, i.e., with more users providing more contributions. This is important, since our small number of study participants has been insufficient to stress the collaborative editing/voting mechanism.
Furthermore, the next research cycle should force the experts to use the functionality provided to modify the topic map.

## 5. CONCLUSIONS

Impact assessment is an important tool for the legislator to pass well-founded laws, policies, norms or regulations. Experts collect a lot of information from specialists of the regulation domain, stakeholders and individuals concerned. Currently, IA is a manual process where experts have to evaluate unstructured sets of opinions, expert reports or statements. In this paper, we have introduced a collaborative, web-based approach that supports the collection and evaluation of IA-relevant knowledge. In particular, we have analyzed the requirements, the nature of information important for IA, and the contributions collected during public IA consultations. Based on this information, we have developed a knowledge representation to classify and link statements, expert reports or opinions. We have evaluated our system using the structured-case method. According to our evaluation, our system might be an important step towards involving a large community of users in future impact assessments.

## 6. REFERENCES
# Human Aspects in Software Architecture Decision Making: A Literature Review

Antony Tang (Swinburne University of Technology, Melbourne, Australia, atang@swin.edu.au), Maryam Razavian (Technische Universiteit Eindhoven, Eindhoven, The Netherlands, m.razavian@tue.nl), Barbara Paech and Tom-Michael Hesse (Heidelberg University, Heidelberg, Germany, {paech,hesse}@informatik.uni-heidelberg.de)

**Abstract**—Despite past efforts, we have little understanding of, and limited research on, how architects make decisions in real-world settings. It seems that software architecture researchers make the implicit assumption that decision making by software architects can be a rational and prescribed process. Such an assumption is disputed in other fields such as economics and decision research. This paper studies the current state of software architecture decision making research in terms of human behaviors and practice. We carried out a literature review on software architecture decision making. We classified papers into decision making behavior and decision making practice and identified the research relationships between them. We found that decision making is a mental activity. Research into the behavioral aspects of software architecture decision making, for incorporation into architectural design practices, is required. We suggest three research topics on human aspects to improve software architecture practices.

**Keywords**—software architecture; decision making; human behavior; methods and tools

## I. INTRODUCTION

For nearly two decades, there has been much interest in the software architecture community in exploring how design rationale [1-3] and software knowledge management [4, 5] help software architecture design. The basic premise of such approaches is that knowledge and rationale provide additional information and argumentation for designing. However, the decision making mechanism is not very well understood in software architecture.
Researchers in the software architecture field may have overlooked factors such as biases and group dynamics that influence software decision making [6], and we want to investigate what has been done. Software designers and developers make decisions regularly, even though they may not be aware of how they make decisions. For instance, they make decisions on what architecture style to use, how to design an API, or what methods should be included in a class. Software architecture decisions are more than the synthesis of knowledge and information into software outcomes, justified by some design rationale. The process of software architecture design involves many stakeholders and a wide range of activities that includes defining goals, defining and clarifying requirements, defining software structures at abstract and code levels. All these activities involve decision making [7]. Do software architects make good decisions when given the right information? Is one way of decision making better than another way? Is there a better way, under given circumstances, of making good decisions? We generally know that sound decision making underpins the quality of good software systems, but we do not really know how to achieve sound decision making. Classical economics theories make assumptions about how consumers make choices from optimal beliefs and rationale [8]. Researchers later found that other forces, such as bounded rationality can influence decision making [9]. The long-held assumption of having full market knowledge and rational choices to optimize economic decisions does not hold anymore. Consumer rationality and cognitive biases need to be taken into consideration in economic theories, thereby forming the basis of behavioral economics [8]. Decision researchers suggested that it is not obvious how decision makers make decisions. Sometimes decision makers themselves cannot tell how they make decisions. 
So it is important to investigate how decisions are made and how to improve decision making [10]. Software architecture researchers have investigated how to aid decision making. However, some architects and researchers make the implicit assumption that software design can be a rational and explicit process. This assumption is questionable. In software engineering generally, human aspects of decision making have been recognized as important but are not often considered [11]. Software architecture decision making is more than mechanically applying some prescribed methods. Humans are involved, and humans make decisions in different ways. We need to understand the human aspects of decision making and to gain more insights into how software architects design. But first, we wish to take an inventory of how much we know about software architecture decision making. To do so, we study empirical research works on decision making. We ask this research question (RQ): RQ: What research on human aspects of software architecture decision-making has been done and how does it reflect on software architecture decision making? Rationale: Humans are first-class entities in software architecture decision making, driven by internal behavioral processes. Decision making is influenced by the engineering processes and methods that are practiced. Our research focuses on these two aspects. We select empirical research papers in this study because the study of human aspects requires empirical evidence and cannot be anecdotal [12]. This study allows us to (a) gain an overview of the subject; and (b) analyze the research directions in this area. Our research approach is to analyze software architecture decision making research papers. In Section 2 we describe our literature review and analysis procedure. We summarize the research results in Section 3. We interpret these findings and discuss the implications for future research in Section 4.

## II. LITERATURE REVIEW

A.
Literature Identification and Selection

This study surveys software architecture decision making research. We take a broad view of software architecture that includes software requirements and design. We started by collecting software decision making research papers that are known to us (Step 1 in Fig 1). From these papers, we identified eight venues that are likely to contain such research works (Step 2). We considered them as the primary sources. These are: (a) Journal of Information and Software Technology (IST); (b) Journal of Design Studies (JDS); (c) Workshop on Sharing and Reusing Architectural Knowledge (SHARK); (d) IEEE Software; (e) Journal of Systems and Software (JSS); (f) Quality of Software Architecture (QoSA); (g) Working IEEE/IFIP Conference on Software Architecture (WICSA); (h) European Conference on Software Architecture (ECSA). Seven of the eight venues are where software architecture researchers often publish their work; the exception is JDS. We picked JDS because it has a focus on design and there were a number of studies of how software designers think in a special issue [13]. We manually traversed the past issues of these eight venues to find relevant papers (Step 2). We looked through all issues for the 11 years from 2005 to 2015. The reason for selecting 2005 is that at that time design rationale study started to take off in the software architecture field with prominent research papers such as [2, 14]. We started this research in early 2016 and hence we ended our literature review at the end of 2015. We retrieved research papers from these primary sources by manually reading the paper titles and abstracts (Step 3). We examined the title and the abstract and looked for key phrases like “decision”, “design decision” or “decision making”. There are also secondary sources where we found relevant research papers.
With the results from our search in the primary sources and the known papers (Step 4), we used a snowballing technique (Step 5) [15] to find relevant papers from citations. We also included papers that we knew about before this review, some of which are well-known papers dated earlier than 2005 (Step 1). These papers are from Empirical Software Engineering, IEEE Expert, Communications of the ACM, ACM Computing Surveys, International Journal of Human-Computer Interaction, IEEE Transactions on Software Engineering, Agile Conference, Automated Software Engineering and book chapters (Step 5). We ended up with a preliminary set of seventy-seven (77) papers from both sources. During Step 5, we also found twelve (12) research papers that are relevant to software decision making but whose subject of study is not software development. Based on this set of papers, we selected research papers to be included in our analysis by applying selection criteria (Step 6). First, a selected paper must study one of two subjects: (a) factors that affect software decision making, especially human factors; (b) software decision making practice in a software development environment. Second, a selected paper must have conducted research that yields empirical results. This criterion eliminates papers that are anecdotal or survey in nature. Third, if a paper does not relate its research results to software decision making directly, the work is excluded from our review. Four researchers were involved in reading the papers. We arranged the reading, selection and coding of the papers such that (a) each paper was assigned randomly and read and coded by two researchers; (b) each researcher had to determine if the paper fits the inclusion and exclusion criteria; (c) each researcher read at least forty-five (45) papers; (d) a researcher could not code or select a paper s/he wrote. We finally selected a total of thirty-three (33) papers. Table 1 summarizes the search results.
The columns indicate at which stage the papers were identified and whether a paper was selected or not. “Step x in” signifies that a paper from a particular source passed the selection criteria. For instance, the cell “Step 1 in / IST” shows that the known software decision articles (Step 1) in IST were selected after applying the selection criteria in Step 6. “Step x out” shows papers that did not meet the selection criteria. The S-number in each cell is the paper identifier. We coded each paper with a summary, a general assessment, and details on the mentioned human aspects and practice: the overall strategy (naturalistic or rational), cognitive aspects (in particular familiarity, expertise, bias), process aspects (decision making task, artefacts, tools, methods, constraints), decision making activities (creation, review or evolution of a decision) and sub-activities (determination, structuring, discussion; explicit/implicit decisions), and decision knowledge (such as problem, solution, context, rationale, external knowledge and their relations). We also coded the empirical study method, but this is not used in this paper. When the codes of the two researchers differed, we discussed them and made adjustments. B. Limitations We did not carry out a systematic literature review searching all databases. This means that we may have omitted papers from other venues (such as the CHASE workshop). We therefore cannot claim that all software architecture decision making literature is included. We only surveyed the mainstream software architecture research venues that are likely to publish such works. We judge that we have a fair representation of the publications on software architecture decision making, and that our method is rigorous. As part of the review, we also gathered research papers from other disciplines, notably psychology, cognitive science, and design studies, to enhance our understanding of decision making in general. We did not carry out any comprehensive literature search in these other disciplines.
We followed the citations from the software papers we found to seek out useful papers from the other disciplines. The research works from the other disciplines provide ideas and lessons for software researchers, both on software decision research and on research methodologies. They are referenced in this paper as relevant background material. III. SOFTWARE ARCHITECTURE DECISION MAKING RESEARCH We found thirty-three papers that are concerned with software decision making and backed by empirical evidence. Following our selection criteria on the subject, we first classified whether a paper focuses on human decision making behavior or on decision making processes, tools or methods. We classified eleven papers into DMBehavior and the other twenty-two papers, on software architecture decision making techniques, methods and tools, into DMPractice. The latter papers generally observe decision making activities or test processes/methods/tools to improve decision making. We further sub-classified these papers based on our coding. We show these two main classes of papers (DMBehavior and DMPractice) in Fig 2, the number of papers found in each sub-class (shown within brackets) and the papers identified in each sub-class. **Figure 2. Decision Making Research Classification** The coding and subsequent classification were based on the main goals and findings of each paper. For instance, S80’s main finding was naturalistic decision making behavior, so we created a sub-classification for this type of research. Some papers have findings that can belong to more than one sub-class. We classify such a paper into one class only, based on its main result. This simplification gives us a better view of the current state of research works. There are 6 sub-classes in DMBehavior and 6 sub-classes in DMPractice. The details of the findings in each class are explained in Sections IIIA and IIIB, respectively. A.
Decision Making Behaviors Eleven DMBehavior papers studied psychological and cognitive aspects of decision making, dealing with different aspects of human thinking. We identified 5 sub-classes according to the primary subjects of these 11 papers. We found 4 papers that study naturalistic and rational decision making. One paper dealt with cognitive bias. Two papers studied group decision making. Two papers studied cognitive limitation and satisficing behavior, classified as cognitive limitation. Finally, two papers studied mental characteristics and experience; they were classified as mental representation papers. There is one further sub-class for which no papers were found. It concerns decision making behaviors, and we call it behavioral science papers. Behavioral science is one of the psychology areas that are widely studied in management and organizations [10]. There is also an awareness of, and there are studies on, behavioral science and decision making in the information systems field [16]. In our review, we found no works that investigate organizational behaviors, motivations, or personality with respect to software decision making. The number of behavioral science papers shown in Fig 2 is therefore zero. Although no such papers were found in the software architecture field, we report this category because other disciplines have shown that these are important contributing factors to decision making. It would be a major gap in this classification if we omitted it. 1. Naturalistic and Rational Decision Making - In naturalistic decision making (NDM), people frequently construct explanations of decisions in the form of stories about possible outcomes. Naturalistic approaches to decision making are more contextually embedded and subjective, and stress the roles of identity and unconscious emotions in decision making [6]. Rational decision making (RDM) describes how decision makers think and act based on coherence and rationality.
A decision maker optimizes choices among alternatives in well-structured settings. Kahneman uses the terms System 1 and System 2 thinking. System 1 is fast, instinctive and emotional, and evolutionarily very old. System 2 is slower, more deliberative and more logical, and evolutionarily more recent [17]. In this sub-class, we include papers that reference either of the two theories, i.e. System 1/System 2 or NDM/RDM. We found 4 papers that base their arguments on either of these two systems of decision making. S52 did a multi-case study of agile teams and found that the teams used NDM. S66 studied how software designers explore the problem and solution space and the role of reasoning in decision making. It was found that explicit reasoning helps designers to communicate better and to avoid assumptions in decision making. It was suggested that System 2 helps problem space exploration and the consideration of solution alternatives. S80 studied the decision making of 25 practitioners. It was suggested that the more structured the problem, the more RDM is used, and the less structured the problem, the more NDM is used. S81 conducted three case studies and found that designers do not consistently strive for optimal design solutions, which is a key characteristic of RDM. 2. Cognitive Biases - Cognitive bias is the general term, introduced by Kahneman and Tversky [18], to denote humans’ inability to reason fully rationally. Cognitive biases are cognitive or mental behaviors that prejudice decision quality in a significant number of decisions for a significant number of people [19]. In our study, we found one DMBehavior paper that researched cognitive bias. In S53, the researchers conducted a controlled experiment to explore the relation between the design process and the framing/presentation of requirements.
It was found that framing desiderata as “requirements” negatively affects creativity in design concept generation, indicating that the term “requirement” may curtail innovation independently of the requirements specifications themselves. 3. Group Decision Making - Software decisions are often made in a group environment. Different group decision making (GDM) tactics such as majority rule, plurality rule and the Condorcet winner are discussed in [20]. Two DMBehavior papers studied group thinking in SE. S3 focused on understanding how software professionals in groups invoke knowledge in their communication, reasoning and decision making for software effort estimation. Using planning poker, the researchers found that concepts used in estimation are anchored in the software engineering knowledge domain and in the historical experiences of the participants. Knowledge is constructed on a basis of social interaction, drawing on specialized concepts from the knowledge domain of software systems in the participants’ efforts to frame and guide the talk. S60 investigated GDM using an online survey with practitioners and researchers in the software architecture community. They found that consensus and brainstorming are used in 70% of companies. AHP, Delphi and voting methods are used by 50% of companies. They also identified group decision challenges: (a) groupthink when the group structure is highly cohesive; (b) misunderstanding of goals; (c) conflicting decisions. 4. Cognitive Limitation - Cognitive limitation refers to the limited capacity of short-term memory or the unreliable retrieval of relevant information from long-term memory [21]. In decision making, the rationality of individuals is limited by the information they have, the cognitive limitations of their minds, and the finite amount of time they have to make a decision [22].
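The GDM tactics cited from [20] above are simple aggregation rules over ranked preferences. A minimal sketch of two of them, plurality rule and the Condorcet winner, where the ballots (ranked design options per team member) are purely hypothetical:

```python
from collections import Counter

# Hypothetical ballots: each list ranks design options, best first.
ballots = [
    ["REST", "gRPC", "SOAP"],
    ["gRPC", "REST", "SOAP"],
    ["REST", "SOAP", "gRPC"],
]

def plurality_winner(ballots):
    """Plurality rule: the option with the most first-place votes."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def condorcet_winner(ballots):
    """Option that beats every other in pairwise majority; None if no such
    option exists (e.g. when preferences are cyclic)."""
    options = set(ballots[0])
    for cand in options:
        if all(
            sum(b.index(cand) < b.index(other) for b in ballots) > len(ballots) / 2
            for other in options - {cand}
        ):
            return cand
    return None
```

With these ballots both rules pick the same option, but the Condorcet winner can differ from the plurality winner, or fail to exist at all, which is one reason the choice of tactic matters in group settings.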
Due to cognitive limitations and other constraints such as time, decisions are made without thorough reasoning, and decision makers satisfice. Satisficing means that a decision maker makes a decision that is good enough to satisfy the goals, seeking a satisfactory rather than an optimal solution [23]. Two papers were found. S28 reported on the problem solving of 8 professional programmers. Researchers observed and analyzed what breakdowns, or difficulties, the professionals encountered. The study found several breakdowns: (a) difficulty in considering all stated and inferred constraints in a solution; (b) difficulty in performing complex mental simulations with many steps or with many test cases; (c) difficulty in keeping track of, and returning to, aspects of problems whose solution refinement had been postponed; (d) difficulty in expanding or merging partial solutions into a complete solution. S70 investigated the extent to which students and professionals look for alternatives in design decision making. It was found that most designers make decisions once they find good-enough reasons. No thorough exploration and reasoning were performed, and not many options were explored. 5. Mental Representation - When a designer solves a problem, the problem is mentally structured and transformed into a representation of the current situation and goals. An understanding of how goals, problems and other relevant information are arranged and processed mentally gives us insight into how decisions are made. Many studies compare how experts and novices perform the same tasks by comparing their cognitive characteristics and mental representations. Expert mental representations were found to demonstrate superior extent, depth and level of detail [24]. Experts accommodate information interconnections and gear decisions towards actions.
Experts view problems as harder than novices do, in that they report needing more information in order to tackle problems. Experts demonstrated more depth and width in the scope of their mental representations. We found two papers that dealt with the issue of mental representation and cognitive characteristics in software decision making. S7 found that software designers use a creative cognitive process to explore and generate in a sequential way, starting with an extensive use of exploratory tasks such as hypothesis testing and functional inference, and through these arriving at generative ideas like associations and analogical transfer. As a decision strategy, software designers used stepwise refinement, in which a complex design problem is decomposed top-down into sub-problems. S62 studied the cognitive characteristics of high software design performers and how they conduct design. High performers typically spent more time on feedback processing and less time on task-irrelevant cognitions. High performers produced more solution visualizations as helpful cognitive tools. High performers verbalized fewer task-irrelevant cognitions than moderate performers. There was only partial support for the hypothesis that high performers spend more time on planning. High performers did not spend more time on problem comprehension early in the process. 6. DMBehavior Paper Summary - Eleven papers were classified into five DMBehavior sub-classes. These five sub-classes dealt with different aspects of human thinking and behavior. Decision makers typically do not seek optimal results through thorough reasoning and argumentation. Instead, they often use a naturalistic decision making approach and settle for sub-optimal solutions. Software architects face cognitive difficulties and limitations in handling highly complex problems. They also suffer from cognitive biases.
Compared with novices, experienced designers are better at exploring problem spaces, and they use feedback to guide their design. There are some hints on how experts better explore the problem space and use a more efficient decision making strategy. We have not found any research that deals with decision making from a behavioral science perspective. B. Decision Making Practice We found twenty-two research papers about decision making processes, methods or tools. All of these papers study some aspect of software architecture decision making practice. We call them DMPractice papers. We found six sub-classes. Decision making process contains papers that investigate the steps software architects take in decision making. Decision making methods contains papers that investigate how a particular method improves decision making. A specific area of group decision making is the agile software development method: agile development methods prescribe steps to help a group of developers reach goals, schedules and consensus. We also found papers that describe decision making tools, papers that describe how design reasoning can aid decision making, and research works that focus on the role of knowledge management in decision making. 1. Decision Making Process - We found five decision making process papers. A decision process prescribes certain high-level steps for making design decisions. S27 explored design process control strategies using a verbal protocol study of professional software designers. They found that designers exhibit opportunistic design behaviors (i.e. designers see a potential solution and jump at the opportunity) as well as systematic design behaviors (i.e. designers use breadth-first or depth-first exploration). The decision making process is highly iterative, with interleaved decisions between different loosely ordered levels of abstraction. S51 described the results of a survey of software architects on their decision making process.
They investigated the decision making scope, decision classification and the level of decisions. They found that locally scoped decisions, such as those concerning a single component, are typically made by individuals, but architectural decisions are made by a team. They also found that previous decisions, product life cycle, user requirements, time and personal preferences influence decision making. S61 investigated how technology solutions are considered by architects during the design process, and how to enhance architectural knowledge management to support technology decision making. S73 presented a survey of the difficulties in making architectural design decisions. Architects consider two to three quality attributes in an architecture decision. The inter-dependencies with other decisions contribute much to the difficulty of decisions. They also found that, generally, good decisions considered more alternative solutions than bad decisions. S26 interviewed twenty-five system analysts, team leads and senior developers to understand decision making in organizations. They found eight factors that influence decision making: company size; business factors; organizational factors; technical factors; cultural factors; individual factors; project factors; and decision scope. 2. Decision Making Methods - The use of decision making methods started in the 1980s [25]. Four papers were found in this sub-class. S32 suggested using the descriptive forces viewpoint for architectural decisions. The study used 3 case studies with students to show that decisions and forces had to be documented explicitly, which caused the students to think more concretely about available decision alternatives. For all groups, using the forces views triggered them to consider quality attribute requirements. S33 investigated whether junior software designers benefit from support for rational architectural decisions through the decision viewpoint concept.
It was found that the decision viewpoint supported the identification of architecturally significant requirements (ASR), requirement negotiation, requirement prioritization, discovery of design options and combinations of options, tradeoff analysis, validation of options against ASR, and architecture evaluation. S39 studied the influence of risk checklists and roles on the risk perception and decision making of software practitioners. It was found that practitioners who used the risk checklist identified significantly more risks than subjects who did not use it. S49 analyzed the results of applying Question, Option and Criteria (QOC). It was found that QOC helped expose assumptions, raised new questions, challenged criteria, and pointed to ways in which new options can capitalize on the strengths and overcome the weaknesses of current options. In this study, it was noted that there was a strong tendency for designers to look for evidence to confirm their initial biases. 3. Agile Software Development Method - Cockburn and Highsmith eloquently frame Agile Software Development (ASD) as people-centric [26]. Decision making is a basic aspect of agile development. We found two ASD decision making related research papers. S14 and S15 are from the same authors. They conducted a focus group study involving 43 practitioners. They found a number of decision making issues in ASD: 1) team members rely on the Scrum master to commit to a decision, and decisions lack commitment; 2) information is not collected rationally and conflicting priorities exist; 3) behaviors are adapted to group dynamics and team composition is unstable; 4) team members are sometimes uncertain about who should make decisions and rely on others to make them; this behavior affects decision ownership and commitment; 5) collaborative decision making prevents experts from making decisions, resulting in a lack of empowerment. 4. Decision Making Tools - We found three decision making tool papers.
S9 presented gIBIS as a hypertext tool, together with iTIBIS. Using a case study, the researchers compared and reviewed how design rationale might make design decision making more rigorous and error free. The tool facilitated communication between team members because the underlying knowledge helped teams to detect when a conversation had wandered. A graphical representation helped participants to understand the complex issues and devise new solutions. The researchers also identified issues with the tool, such as scalability issues due to capturing knowledge, and the lack of motivation of a designer to capture knowledge used by others. S21 proposed a meta-model to capture decision making constraints with defined semantics, together with a collaborative architecture decision making approach. The researchers conducted a controlled experiment of the approach and the tool (CoCoADviSE) involving 48 people. They found that automatic enforcement of constraints increases the effectiveness and efficiency of decision making because it takes away from the user the burden of detecting, preventing and resolving constraint violations “manually”. S46 reported two other experiments on CoCoADviSE using students. These experiments found that students needed less time to document design decisions using the tool. 5. Design Reasoning - Design reasoning is a process that makes use of information and design rationale to support logical argumentation in decision making. Design rationale is the justification of a decision. [27] reported that well-structured design rationale is documentation that helps designers track and evaluate the issues and alternatives being explored. Many design rationale methods have been suggested [28, 29]. Though design rationale provides design justifications, these methods do not show how the process of reasoning is carried out. We identified four papers that deal with design reasoning.
S30 was a survey of fifty-three professionals to find out how they reason in real projects. Software architects often searched for multiple design options when making decisions, and they considered interconnected decisions. They usually think about the pros and cons of design options, but they seldom reject decisions they made before. S31 was a survey of undergraduate students about naïve reasoning in architecture decision making. Students were taught to consider the ASRs and put emphasis on the quality attribute requirements. However, many students did not identify the most challenging requirements, nor did they prioritize them. Students did not relax requirements to yield more design options, and they also declared that they preferred well-known solutions over unknown alternatives. Also, they did not seem to be aware of the limitations and constraints that the solutions impose on other decisions. Students weighed the pros and cons of design options, but they did not consciously make trade-offs between requirements, and they neglected to validate the decisions against each other. Students did not seem to be aware of the dependencies and relationships between architectural decisions. The students quickly came up with a first architectural vision and did not significantly deviate from this vision anymore. This is another indicator that students did not critically evaluate their decisions. S67 was a survey on how practitioners think and reason about design decisions and design rationale practices. It was found that the following design rationales were used to support decision making: constraints, assumptions, weaknesses, cost, benefits, complexity, certainty of design, certainty of implementation and tradeoffs. Additionally, design rationales that positively justify a design receive more attention than negative rationales that explain why the design may have issues.
That led the researchers to suspect that there might be a tendency or a bias towards presenting “good news” rather than “bad news”. S69 explored the effects of design reasoning on the quality of design by comparing two groups of practitioners in a controlled experiment. It was found that, for junior designers, explicitly stating their design rationale helped improve design quality. By explicitly stating design issues and options, the test group performed more systematic design reasoning and was able to backtrack their decisions. 6. Knowledge Management - Experience and knowledge play a role in decision making, and the management of such knowledge can facilitate decision support. Knowledge management encompasses knowledge capture (in terms of documentation), sharing and communication [4]. Four papers were found in this area. S47 identified patterns for service-based integration based on a systematic literature review. The identified patterns are grouped into four decision levels: architecture, platform, integration and application. S35 investigated knowledge sharing between software architects and characterized their position as decision makers. It was found that architects spent most of their time on making architectural decisions and less time on documenting the decision results. S77 presented an interview-based case study of practitioners about design decisions and their documentation. In documenting architecture decisions, architects classified design decisions according to granularity, scope and impact. Low-level decisions with a local scope are often called design decisions or implementation decisions, whereas high-level decisions with a global scope are typically referred to as architectural decisions. S19 introduced value-based documentation of design rationale.
Researchers identified useful design decision information such as issue articulation, design decisions, requirements, positions and alternatives, arguments, constraints, assumptions, related decisions, status, related principles, artifacts and notes. This knowledge serves future decisions. 7. DMPractice Paper Summary - A number of observations arise from the study of decision making practice papers. First, a number of papers have identified that decision complexity is one of the main issues. To remedy this problem, they propose methods such as capturing decision chains and visualization tools to help software developers. Second, designing and decision making do not follow a prescribed process. Architects are opportunistic, and their design focus shifts as they move through design problems and solution spaces. Third, architects can fixate on interim decisions that have been made, and they do not change their decisions despite the arrival of new, contradicting information. Fourth, when working together with a method like ASD, issues such as decision deference and indecision can arise. Whilst most DMPractice papers have proposed some process, methods or tools to aid decision making, almost all the papers in this class do not discuss the contributing human behavioral issues. IV. DISCUSSIONS Based on the literature review, we make observations on the overall research scene in software architecture decision making; we discuss what we have learned, and we identify new research opportunities. A. Few Human Aspect Research Works Lenberg et al. suggest that even though the software engineering field recognizes the importance of human aspects, the main research focus has been on technology [11]. We found a similar phenomenon here. Out of the thousands of papers published in the selected venues over 11 years, we found only thirty-three papers on software architecture decision making that satisfy our selection criteria. This is a small number.
Of these thirty-three papers, only eleven deal with human behaviors, whereas twenty-two studied processes, methods, techniques or tools. This seems to indicate that the emphasis is on methods and tools rather than on decision behaviors. We argue that decision making practices are rooted in the way software architects think and act. In order to improve software architecture decision making practice, it is necessary to carry out more studies on decision making behaviors. B. Symbiotic Relations of Behavioral and Practice Research A software project is typically built by many people having differing personalities and differing skills, working in a physical environment within an organizational culture [26]. S10 found that many managerial decisions are unconscious and subject to many cognitive biases. If we are unaware of these unconscious cognitive activities of decision makers, we may not realize that these issues could adversely influence the execution of a development method. DMBehavior works provide fundamental and important knowledge that underpins software architecture decision making tools and techniques. DMPractice papers focus on software development processes, methods and tools that improve decision making behavior. Observing subjects in situ in a decision process provides insights into how software developers work in their specific environments and contexts. Fig. 2 illustrates a symbiotic relationship. Both research areas are necessary to provide a complete picture to improve software architecture decision making. For instance, S32 and S33 propose different viewpoints to shape decision making. These methods describe how software developers can design better in a certain context. Software developers may use such a decision making method to overcome human issues such as cognitive limitations through better focus and tool support. Both research areas can mutually benefit by leaning on each other.
There are many research opportunities to further investigate behavioral decision making in software practice. For instance, it would be interesting to see how improvements within design reasoning or knowledge management impact cognitive biases, cognitive limitations and satisficing behavior. Also, it would be interesting to understand how to measure or identify the extent of cognitive biases and limitations in software architecture decision making. C. Behavioral Software Architecture Decision Making Research Topics From the literature review, we have noticed that many human behavioral issues have not been attended to by software architecture researchers. These issues are fundamental to formulating processes, methods and tools that aid architects in their practice. We summarize them into three research topics. Decision Making Heuristics. Software design complexity increases as requirements become more interrelated, technologies become more advanced, and the needs of customers grow and diversify. Software architects naturally employ decision making heuristics in such an environment. It has been found that, in general, 50%-70% of management decisions are unconscious. Anchoring-adjustment, the availability heuristic, the representativeness heuristic and moral judgments all play a role in decision making [30]. For instance, an architect may choose to explore one particular potential solution and then grow attached to it, subsequently ignoring other potentially good solutions (i.e. anchoring and not changing). It has been suggested that intuitive (unconscious, System 1) and rational (conscious, System 2) processes complement each other in decision making [17, 31]. In this review, a number of papers such as S80 show that software professionals often use naturalistic decision making and sometimes rational decision making. The choice of decision heuristics is often implicit, but it influences decision outcomes.
In [30], Crowder listed decision heuristics used by senior managers, and each heuristic comes with potential biases. We tabulate some of these decision heuristics and our interpretation of the corresponding software biases in Table II. The list of decision heuristics in Table II is unlikely to be exhaustive; more heuristics need to be identified and studied. At this stage, we understand that decision heuristics and decision making behavior occur naturally. How they are used by architects can produce different results, some better and some worse. Our question is: What decision making heuristics can software architects use to cope with architecture design complexity? Currently we know little about this area. Further studies of decision making heuristics, the potential issues and counter measures can be beneficial in providing decision making mental tools. **TABLE II. DECISION MAKING HEURISTICS AND POTENTIAL COGNITIVE BIASES (ADAPTED TO SOFTWARE ARCHITECTURE)** <table> <thead> <tr> <th>Decision Heuristics</th> <th>Potential Biases in Software Development</th> </tr> </thead> <tbody> <tr> <td>Anchor Adjustment</td> <td>Designers fixate on an initial software architecture design and are unwilling to consider a better alternative</td> </tr> <tr> <td>Availability</td> <td>Designers make decisions based on the heuristics/knowledge immediately known to them, instead of exploring unknown solutions</td> </tr> <tr> <td>Representative</td> <td>Judging a preconceived scenario as representative of a general situation. Designers sometimes guess whether a certain use case scenario is a general scenario and how often it happens, then design software to cater for that.</td> </tr> <tr> <td>Moral</td> <td>Designers make decisions based on what they think is right. 
They judge whether they should design software that benefits the end user or the company they work for, especially if these goals contradict.</td> </tr> <tr> <td>Elimination by Aspect</td> <td>Decision makers focus on one aspect and eliminate alternatives that do not have this aspect. Software developers may eliminate a design for performance if the prime focus is security.</td> </tr> </tbody> </table> **Mental Representation and Limitations.** Langley et al. suggest that decision making is not as structured as some researchers theorize; decision making can be “dark and tangled” [32]. Parnas and Clements suggest that software developers do not always design systematically; they fake rationality by providing design rationale after the fact [33]. Guindon et al. find that software designers do not follow a structured design process. They observe that designers are opportunistic and can veer off to areas that they are most interested in at the time [34]. Björklund suggests that experts use mental representations to associate information and tackle problems [24]. A software developer’s mental capacity is limited by memory capacity, processing power and bounded rationality [9]. All these works point to some human limitations. S28 describes the difficulties in handling design complexity, whilst S70 describes the satisficing behaviors of developers. In order to overcome some of these human limitations, S7 suggests that a creative process of exploration, hypothesis testing and problem recognition is important. S62 discusses feedback processing, solution visualization and task-irrelevant cognitions. From these discussions, a number of research questions are worth asking: What steps do we take to improve mental capabilities for better software architecture design? How do we check cognitive limitations? Can we create better tools and software architectures based on an understanding of expert mental representation?
**De-biasing.** Cognitive biases have been found to play a role in general decision making [35]. S53 shows the presence of framing bias in requirements elicitation. Framing bias has also been shown in medical decision making [36]. In system development, stakeholders were shown to be biased in many ways [19]. Stacy and MacMillan gave anecdotal examples of cognitive biases, concerning inheritance and dynamic binding, in software engineering [37]. Vliet and Tang gave anecdotal examples of cognitive biases in [6]. Thus, the question is: How do we de-bias, i.e., reduce or eliminate the effect of biases? Keren's framework for de-biasing in medical diagnosis and prescription is an example for reference [38]. In summary, we need to (i) understand the environment that creates biases; (ii) study and apply alternative means for reducing or eliminating biases; and (iii) monitor and evaluate the effectiveness of de-biasing technique(s). Kahneman [35] suggests using a checklist to reveal fundamental decision making thoughts: (a) is there any reason to suspect motivated errors, i.e., errors driven by the self-interest of the recommending team? (b) have the people making the recommendation fallen in love with it? (c) were there dissenting opinions within the recommending team? Reflection might also be another means to encourage reasoning and check biases. Asking simple questions seems to have an effect on stimulating design reasoning [39]. Other works in software architecture reviews use a rational and systematic approach, based on decision information and rationale, to review design decisions [40, 41]. **V. CONCLUSION** Decision making is a unique human activity involving many aspects such as cognition, behaviors and group interactions. In software architecture decision making research, researchers have investigated both the behavior and practice aspects of this activity. The factors that influence decision making are complex and intertwined.
We wanted to understand the current state of research on software architecture decision making, in terms of human behaviors and practice, and how they are related to each other. We also wanted to understand what further research questions we can ask. To achieve these goals, we conducted a literature review of eight different research publication venues, between 2005 and 2015, to search for empirical papers on human aspects in decision making. We classified these papers into two major classifications, decision making behaviors and decision making practice. To aid our analysis, we referenced decision making research works from other disciplines to give us some context. Our main conclusions are as follows. First, there are few research works on human aspects in software architecture decision making. We found only 33 papers, and there is an apparent lack of knowledge to improve decision making practices. Second, there exists a symbiotic research relationship between decision making behavior and decision making practice. Knowledge from decision making behavior can underpin practice improvements, and knowledge from decision making practice can improve our understanding of decision behavior. Third, three research topics are identified: (a) formulating decision making heuristics to cope with design complexity; (b) providing aids to assist the mental capabilities of software architects to cope with cognitive limitations; and (c) dealing with cognitive biases. For the future, a systematic literature review with a wider scope that includes other software engineering venues such as the CHASE workshop and searches beyond the past decade would be valuable.
{"Source-Url": "http://se.ifi.uni-heidelberg.de/fileadmin/pdf/publications/2017_-_Tang__Razavian__Paech_et_al._-_Human_Aspects_in_Software_Architecture_Decision_Making.pdf", "len_cl100k_base": 8715, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 35374, "total-output-tokens": 13516, "length": "2e13", "weborganizer": {"__label__adult": 0.0005612373352050781, "__label__art_design": 0.0015954971313476562, "__label__crime_law": 0.0003783702850341797, "__label__education_jobs": 0.003204345703125, "__label__entertainment": 0.00011515617370605467, "__label__fashion_beauty": 0.0002160072326660156, "__label__finance_business": 0.0003554821014404297, "__label__food_dining": 0.0003554821014404297, "__label__games": 0.0010099411010742188, "__label__hardware": 0.0004858970642089844, "__label__health": 0.00042557716369628906, "__label__history": 0.00031948089599609375, "__label__home_hobbies": 8.088350296020508e-05, "__label__industrial": 0.00029754638671875, "__label__literature": 0.0007109642028808594, "__label__politics": 0.0003440380096435547, "__label__religion": 0.0005049705505371094, "__label__science_tech": 0.00821685791015625, "__label__social_life": 0.0001132488250732422, "__label__software": 0.00571441650390625, "__label__software_dev": 0.97412109375, "__label__sports_fitness": 0.0003509521484375, "__label__transportation": 0.0004184246063232422, "__label__travel": 0.00019419193267822263}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 60111, 0.03608]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 60111, 0.68397]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 60111, 0.93007]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 5211, false], [5211, 10932, null], [10932, 15174, null], [15174, 21346, null], [21346, 27524, null], [27524, 33711, null], [33711, 39451, null], 
[39451, 45765, null], [45765, 52531, null], [52531, 60111, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 5211, true], [5211, 10932, null], [10932, 15174, null], [15174, 21346, null], [21346, 27524, null], [27524, 33711, null], [33711, 39451, null], [39451, 45765, null], [45765, 52531, null], [52531, 60111, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 60111, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 60111, null]], "pdf_page_numbers": [[0, 0, 1], [0, 5211, 2], [5211, 10932, 3], [10932, 15174, 4], [15174, 21346, 5], [21346, 27524, 6], [27524, 33711, 7], [33711, 39451, 8], [39451, 45765, 9], [45765, 52531, 10], [52531, 60111, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 60111, 0.04321]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
bc3c28b93757dd82dbdf04f371f55c3bab9ff62d
[REMOVED]
{"Source-Url": "https://www.hs-aalen.de/uploads/publication/file/8494/BMSD15-LNBIP-Oberhauser.pdf", "len_cl100k_base": 8892, "olmocr-version": "0.1.53", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 48969, "total-output-tokens": 12064, "length": "2e13", "weborganizer": {"__label__adult": 0.0003123283386230469, "__label__art_design": 0.0005850791931152344, "__label__crime_law": 0.00031113624572753906, "__label__education_jobs": 0.0014944076538085938, "__label__entertainment": 0.00011104345321655272, "__label__fashion_beauty": 0.0001914501190185547, "__label__finance_business": 0.0016031265258789062, "__label__food_dining": 0.0003540515899658203, "__label__games": 0.0005197525024414062, "__label__hardware": 0.0006799697875976562, "__label__health": 0.0004405975341796875, "__label__history": 0.0003476142883300781, "__label__home_hobbies": 8.594989776611328e-05, "__label__industrial": 0.000568389892578125, "__label__literature": 0.0003993511199951172, "__label__politics": 0.0003190040588378906, "__label__religion": 0.0003757476806640625, "__label__science_tech": 0.07470703125, "__label__social_life": 0.0001188516616821289, "__label__software": 0.0227813720703125, "__label__software_dev": 0.892578125, "__label__sports_fitness": 0.00023484230041503904, "__label__transportation": 0.0006566047668457031, "__label__travel": 0.00022733211517333984}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 49768, 0.03757]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 49768, 0.18478]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 49768, 0.88853]], "google_gemma-3-12b-it_contains_pii": [[0, 2518, false], [2518, 5804, null], [5804, 7686, null], [7686, 11244, null], [11244, 14556, null], [14556, 16424, null], [16424, 18410, null], [18410, 20368, null], [20368, 22637, null], [22637, 24157, null], [24157, 27332, null], [27332, 
29433, null], [29433, 29778, null], [29778, 31426, null], [31426, 33969, null], [33969, 34642, null], [34642, 36050, null], [36050, 38600, null], [38600, 40703, null], [40703, 43525, null], [43525, 47228, null], [47228, 49768, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2518, true], [2518, 5804, null], [5804, 7686, null], [7686, 11244, null], [11244, 14556, null], [14556, 16424, null], [16424, 18410, null], [18410, 20368, null], [20368, 22637, null], [22637, 24157, null], [24157, 27332, null], [27332, 29433, null], [29433, 29778, null], [29778, 31426, null], [31426, 33969, null], [33969, 34642, null], [34642, 36050, null], [36050, 38600, null], [38600, 40703, null], [40703, 43525, null], [43525, 47228, null], [47228, 49768, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 49768, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 49768, null]], "pdf_page_numbers": [[0, 2518, 1], [2518, 5804, 2], [5804, 7686, 3], [7686, 11244, 4], [11244, 14556, 5], [14556, 16424, 6], [16424, 18410, 7], [18410, 20368, 8], [20368, 22637, 9], [22637, 24157, 10], [24157, 27332, 11], [27332, 29433, 12], [29433, 29778, 13], [29778, 31426, 14], [31426, 33969, 15], [33969, 34642, 16], [34642, 36050, 17], [36050, 
38600, 18], [38600, 40703, 19], [40703, 43525, 20], [43525, 47228, 21], [47228, 49768, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 49768, 0.09524]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
3fe49b0209c191829bd8a208ee450e9dde49cefa
An Evaluation of Emerging Many-Core Parallel Programming Models

Matt Martineau (HPC Group, University of Bristol, m.martineau@bristol.ac.uk), Simon McIntosh-Smith (HPC Group, University of Bristol, cssnmis@bristol.ac.uk), Mike Boulton (HPC Group, University of Bristol, michael.boulton@bristol.ac.uk), Wayne Gaudin (Atomic Weapons Establishment, wayne.gaudin@awe.co.uk)

Peer reviewed version. DOI: 10.1145/2883404.2883420

Abstract In this work we directly evaluate several emerging parallel programming models: Kokkos, RAJA, OpenACC, and OpenMP 4.0, against the mature CUDA and OpenCL APIs. Each model has been used to port TeaLeaf, a miniature proxy application, or mini-app, that solves the heat conduction equation, and belongs to the Mantevo suite of applications. We find that the best performance is achieved with device-tuned implementations but that, in many cases, the performance portable models are able to solve the same problems to within a 5-20% performance penalty. The models expose varying levels of complexity to the developer, and they all present reasonable performance. We believe that complexity will become the major influencer in the long-term adoption of such models. Categories and Subject Descriptors D.1.3 [Software]: Parallel Programming 1. Introduction HPC is undergoing significant growth as science and engineering are increasingly reliant upon large-scale simulations to support their cutting edge progress. In order to take advantage of supercomputing platforms, scientific codes need to be carefully re-engineered to exploit concurrency. Even after an application has been parallelised, portability can be a major issue, with many parallel programming models tying you to a particular device or platform.
Scientific applications are often long-lived and monolithic, meaning that they cannot be easily rewritten to take advantage of modern supercomputing resources, greatly inhibiting their potential [6]. Energy efficiency has become a limiting factor in designing new HPC technologies, leading to a shift towards many-core devices and heterogeneous computing [9]. Many of the world’s fastest supercomputers include a mix of CPUs, GPUs and accelerators, and this massive increase in node-level parallelism means that applications are becoming harder to develop for current architectures, and even harder to future-proof. These factors have created a demand for programming models that will enable scientific applications to take advantage of heterogeneous HPC resources, without having to maintain versions for each device. Importantly, while there are libraries and standards that enable some level of functional portability, they do not necessarily guarantee performance portability [7]. Given the prohibitive cost of rewriting scientific applications and the current rapid rate of change, it is imperative that application developers are well informed when they consider a modern parallel programming model, in order to safeguard their HPC investments. 1.1 TeaLeaf – A Heat Conduction Mini-App TeaLeaf is an open source project that belongs to both the UK Mini App Consortium (UKMAC) [25] and the Mantevo project [6]. The UKMAC represents a consolidated national effort to understand modern technologies and algorithms, contributed to by Warwick University, Oxford University and the University of Bristol, and supported and funded by the Atomic Weapons Establishment (AWE). The Mantevo project, run by Sandia National Laboratories, is an award winning collection of open source applications geared towards analysing high performance computing applications.
Mini-apps support the investigation of optimisation, scalability and performance portability, free from the limitations imposed by attempting such analyses with fully functional scientific applications. Importantly, this supports research into techniques for optimising such codes that can eventually be transferred into real scientific applications. TeaLeaf, in particular, has just enough functionality to be representative of the performance profile and computational complexity exposed by a production code, whilst maintaining a small codebase that is amenable to experimentation. The program is characterised by two of the seven dwarfs of High Performance Computing [1], Structured Grid and Sparse Linear Algebra. The two-dimensional TeaLeaf implementation contains three iterative sparse matrix solvers: Conjugate Gradient (CG), Chebyshev, and Chebyshev Polynomially Preconditioned CG (PPCG) [2], each of which uses a 5 point stencil to solve the heat diffusion equation using face centred diffusion coefficients based on cell average densities. To ensure numerical stability of a parabolic partial differential equation with a tractable timestep, an implicit method is employed. The explicit solution, though simple to implement, is constrained by a timestep that scales as $1/\Delta x^2$ [25]. 2. Programming Models Background Each of the programming models presented in this work requires a different development approach, and exposes varying levels of complexity and functional portability. Some details of the background, abstract approach and syntax relevant to TeaLeaf are presented. 2.1 OpenMP 4.0 OpenMP is a directive-based programming model that is widely adopted for parallel programming targeting CPUs in shared memory environments. The new standard, OpenMP 4.0 [18], introduces a number of directives that are designed to allow portability to accelerators through offloading [14].
The execution model takes directions from the host to offload computationally expensive operations to an accelerator device [26]. It must be noted that there is currently only limited compiler support for the offloading, which means that, at the time of writing, the TeaLeaf OpenMP 4.0 version can only be tested on Intel Xeon Phi Knights Corner (KNC) devices. The syntax of relevant and new OpenMP 4.0 statements is presented in Figure 1 and discussed below: - **omp target map(direction: array):** This region surrounds a section of code that will be offloaded, while potentially mapping some data onto the device. - **omp target data:** Maps data onto a device for the duration of the scope, allowing multiple target regions to utilise data maintained on the device, avoiding unnecessary data transfers. - **omp target update to(variable or array):** Makes a variable or array in both memory spaces consistent by copying to or from the device. - **omp simd:** Although not related to offloading, the simd directive is an important new feature that attempts to force vectorisation of loops that wouldn’t usually auto-vectorise, limiting the changes required to enable vectorisation. OpenMP 4.0 is the principal open standard using a directive based approach, and offers a highly usable interface for parallel performance on heterogeneous devices. At the time of writing compiler support is limited, but several of the main compiler vendors are planning to introduce GPU-targeting functionality in the near future. 2.2 OpenACC OpenACC is another directive-based programming model that supports offloading to NVIDIA GPUs and, more recently, x86 CPUs when using the PGI 15.10 compiler suite. Developers can use a selection of directives to inform the compiler as to how optimal code can be generated with minimal changes to a parallelisable code-base.
The directives are very similar to those in OpenMP 4.0, and expose similar functionality, the syntax of which is shown in Figure 2 and discussed below: - **acc data copy(a):** Copies a to and from the device at the beginning and end of the scope. - **acc kernels present(a):** Denotes a region of code that is to be offloaded to the target device, where a has already been copied onto the device by an enclosing data scope. - **acc loop independent:** Signifies that a particular loop has data-independent iterations that can be offloaded to a device for parallel execution without internal synchronisation. In order to support finer control over parallelism, it is possible to suggest the way that the iteration space should be decomposed using the gang, worker, and vector directives. 2.3 The RAJA Portability Layer The RAJA programming model is a brand new abstraction layer designed by Lawrence Livermore National Laboratory (LLNL) to improve the performance portability of advanced simulation and computing (ASC) codes. The key technical paper by Hornung et al. [7] outlined two core goals: (1) to abstract away “non-portable compiler and platform-specific directives” and other implementation details, insulating application developers, and (2) to make it easier for application developers to tune data layout and memory access for optimal operation on diverse memory hierarchies. They suggest that organising and controlling memory locality is an essential step in porting serial scientific applications to run on parallel architectures, a position consistent with others who have tackled the same problem [22, 27]. Decomposing the problem domains into smaller units allows threads to have improved utilisation of shared data caches, but can lead to non-shared data and “domain management operations” saturating the available resources. In principle, RAJA makes it easier to perform chunking, allowing optimisation of instruction and data cache utilisation.
An example of the syntax is shown in Figure 3, and there are several abstractions that are foundational to the design of the model, which are discussed below: - **Separate loop body from traversal:** This decoupling makes it possible to choose device-optimal access patterns for a function without altering the loop body. - **Partition iteration space into work units (Segments):** Abstracting access patterns into *Segments*, which fetch data using different access strategies. When different access patterns are required for a single operation, dividing memory access patterns into similar types allows the strategies to be handled separately, potentially in parallel. - **Segment dispatch and execution (Indexsets):** RAJA supports combining the previous features, allowing *Segments* to be aggregated based on type and dispatched for execution using a loop template. *Indexsets* represent a policy, e.g. “Dispatch segments in parallel and launch each segment on either a CPU or GPU as appropriate”. The developer can recouple the core logic to a particular *Indexset* by choosing one of the built-in dispatch functions and passing a lambda statement (C++11 anonymous function declaration) containing the loop body. Internally, the built-in dispatch functions wrap up platform-specific implementations, for instance a CPU-targeting implementation can contain OpenMP code, and a GPU-targeting implementation can use CUDA. Unfortunately, CUDA 7.0 does not currently support offloading lambda statements from host to device, which has slowed RAJA’s GPU development. More recently CUDA 7.5 has added experimental support for lambda-based kernels that can be defined in host code, and the RAJA developers are in the process of writing an NVIDIA GPU targeting implementation based on this functionality.
### 2.4 Kokkos The Kokkos framework is part of the Trilinos project, developed by Sandia National Laboratories to provide a modern abstract approach to developing applications that require performance portability. The project emphasises the development of “robust algorithms for scientific and engineering applications on parallel computers”. Edwards et al. [4] acknowledge two principal techniques that embody the philosophy of the model: (1) utilising abstraction to perform computation on many-core devices, and (2) leveraging the power of C++ templates to provide portable high performance data layout tuning functionality. The programming model provides a range of generic abstractions that allow the user to create new codes or port existing applications. An example of the syntax is shown in Figure 4 and some of the abstract model and implementation details are discussed below: - **Execution and Memory Spaces:** The library makes a distinction between execution space and memory space to support GPUs and accelerators, which need to have distinct memory from the host CPU. The library handles much of the interaction between these spaces but the developer can move data to and from the spaces using built-in copy methods. - **Data Structures:** Kokkos uses *Views*, which are abstract data types that support mixing dynamic and compile-time dimensions for optimisation, as well as copy semantics analogous with the C++ std::shared_ptr, avoiding complex ownership constraints. - **Functors:** The library utilises C++ class constructs called functors, where the function operator is overloaded and encapsulates the core functional logic. This pattern requires that *Views* are declared as local variables inside the class, and supports customisable reductions of complex types.
- **Lambda Support:** Lambda constructs can be used instead of functors to greatly reduce the amount of code required to write each kernel, but CUDA 7.0 does not currently support the feature, limiting its functional portability. - **Parallel Execution:** Two key data parallel execution operations are provided: `parallel_for` which encapsulates iterative execution and `parallel_reduce` which further allows reduction of data using some function (defaults to zero-initialised sum). Underpinning the abstract semantics is an implementation that uses C++ template meta-programming to rewrite the functors into device-specific code. Kokkos is currently able to output pthreads, OpenMP, and CUDA, which supports a good level of functional portability. ### 2.5 OpenCL OpenCL is an open standard, released in 2008 [17], for writing applications that can execute on heterogeneous many-core devices. The standard was created to offer a low level but portable abstract model that can support both data and task parallel programming approaches. Given the number of vendor implementations of OpenCL, it is by far the most functionally portable model evaluated by our research. Some key syntax is presented in Figure 5, and OpenCL’s three primary abstract models are discussed below: - **Platform model**: The model represents a host that interacts with some number of devices, with each device containing compute units, which in turn contain processing elements [8]. - **Execution model**: Host code and device code have isolated execution spaces, where kernels represent work that can be completed by a device. Kernels operate within a particular context established by the host code, which connects devices, kernels, program source and variables. Device-specific command queues can be created that allow work to be queued into the device by the host.
The queues accept groups of work items called work groups, which are executed on a compute unit, with each work item being handled by an individual processing element [15]. - **Memory model**: The memory model separates the distinct memory regions and objects available to the host and devices that share a context. Similarly to CUDA, this includes a distinction between host and device memory, as well as some hierarchy of memory on the devices that relates to the way data is mapped in an implementation-specific manner [23]. Because of this hierarchy of abstraction, OpenCL requires boilerplate code that is not necessarily required by other models. This includes setting up platforms, contexts, command queues, all kernel arguments and managing data transit between host and device. As a consequence, OpenCL is able to support a generic and open standard that is flexible to modern architectures, and can benefit from wide adoption on a range of devices, whilst offering extensive scope for performance tuning. ### 2.6 Compute Unified Device Architecture (CUDA) CUDA is a mature parallel computing platform developed by NVIDIA to allow application developers to offload computation to their own GPUs, and should enable the greatest possible performance on NVIDIA devices. The platform supports C-based kernels and avoids abstracting the GPU architecture, instead exposing it to enforce decomposition of problems for task, data and thread parallelism. Underpinning the CUDA framework is the Parallel Thread Execution (PTX) intermediate representation (IR), and CUDA can be compiled using the `nvcc` compiler for immediate execution or for just-in-time compilation by the CUDA runtime. The CUDA syntax is presented in Figure 6 and the abstract model is discussed below: - **Threading**: Local resources are shared amongst threads, with each processing element of a streaming multiprocessor performing the same operations on individual data elements.
- **Kernels and decomposition**: To inform the GPU of which instructions to run, the application developer can create kernels. Kernels can then be structured onto a grid, and the problem domain decomposed into sets of thread blocks that are isolated from each other, each running on an individual multiprocessor and only sharing `global` memory.
- **Memory spaces**: There are a number of memory spaces other than `global`: for instance, `registers` are the lowest-latency memory, accessible only by individual threads, and `shared` memory is shared between the threads in a particular block. The library provides memory copy operations for moving data to and from the device, and there is additional functionality to map data for direct memory access between device and host for improved data transfer rates.

CUDA represents a pioneering and mature platform for writing optimised computation to offload to NVIDIA GPUs, but limits the user to only ever run their code on those devices. It is important to recognise that any parallel programming model targeting NVIDIA GPUs will use the CUDA platform and PTX IR to actually offload computation to the device, meaning that CUDA applications can provide a lower bound for performance on supported devices.

### 3. Design, Development, and Findings

Although it is not possible to guarantee that each of the models is perfectly implemented, the ports were individually considered and optimised. Each model was categorised as either cross-platform (OpenCL, Kokkos, RAJA, OpenACC, and OpenMP 4.0) or platform-specific (CUDA, and OpenMP 3.0), and non-portable optimisations were strictly avoided for the cross-platform implementations. In particular, optimisations were chosen that represented the best performance portability across all supported devices, with some consideration for future portability. Importantly, TeaLeaf's core solver logic and parameters were kept consistent
between ports to ensure that each of the programming models was objectively compared. While OpenMP 3.0 can be compiled natively on the KNC architecture, it is not considered a performance portable programming model and so was used as a best case for performance on the CPU and KNC. Although the implementation of RAJA available to us for this research is unreleased and excludes GPU support, we hope to extend this research to include RAJA GPU results in the future. Further to this, it must be reiterated that OpenCL's support extends beyond the target devices, in particular for targeting GPUs that are not intended for compute-only purposes. Each of the models has an associated development cost, and exposes varying levels of complexity. As TeaLeaf is a fairly regular and structured data-parallel application, it required only the core feature set of each model, and did not necessarily exercise some of the more complicated use cases that will be encountered with large scientific applications. An important trend observed is that all of the programming models focus on node-level parallelism and exclude support for inter-node communications, which is handled with MPI in TeaLeaf. There are restrictions on the languages supported by some of the models; for instance, Kokkos and RAJA require that the resulting application code is compiled as C++11. Although the original TeaLeaf application was an OpenMP Fortran 90 codebase, we developed a functionally identical OpenMP C implementation to serve as a starting point for all of the ports.

### 3.1 OpenMP 4.0

We have ported several mini-apps to OpenMP 4.0, and each time have encountered a performance overhead dependent upon the number of target invocations performed during execution. In TeaLeaf's case it is possible to wrap the entire solve step in a single target region, achieving a nearly identical runtime to the optimal OpenMP Fortran 90 native implementation.
Unfortunately, this pattern has two flaws that make it unsuitable for real codes: (1) MPI communication cannot be handled from within a target region, and (2) it is potentially non-portable to other devices, in particular GPUs, which are not intended to perform complex control flow and data allocations. While we recognise that there is an overhead inherent in the target offload regions, we have not been able to prove exactly what causes it. Our understanding of the target offloading model is that each region is handled synchronously, theoretically leading to stalling around each computation. The recently released OpenMP 4.5 specification includes the `nowait` clause for target regions, ensuring that a stream of target invocations can be queued on the device for immediate back-to-back execution. We hypothesise that this functionality will have a significant influence on the target overheads. Overall, the complexity involved in developing the OpenMP 4.0 port was low, but required more specialist knowledge than OpenMP 3.0, especially in managing the flow of data to and from the device. One small difficulty we encountered was that the current OpenMP 4.0 standard does not perform deep copies of members of Fortran datatypes inside the `map` directives. In order to overcome this problem, it was necessary to pass all variables individually to functions that map data onto the device. We believe that this small restriction could present a surprisingly significant overhead for a large application with pervasive use of custom datatypes. Also, the target data regions are currently constrained to lexically structured scopes, which was not an issue for porting TeaLeaf but may not scale well to more complicated applications. This issue is addressed in the OpenMP 4.5 specification with the introduction of the unstructured `target enter data` and `target exit data` directives.
### 3.2 OpenACC

Our OpenACC implementation of TeaLeaf was completed after the OpenMP 4.0 port, and we found that the two approaches had many similarities. In fact, it was possible to use the OpenMP 4.0 codebase as a starting point, changing the directives but maintaining the same data transitions. Our final design embellished each of the data-affecting loops with the `kernels` directive, affording the compiler as much flexibility as possible when generating the offloading code. To successfully compile all of the loops as accelerator kernels, it was further necessary to append `loop independent` to each of the directives, telling the compiler that the iterations can be executed in parallel. Finally, to achieve the best possible performance, all of the loops were collapsed, ensuring enough work was available to the target device. The `collapse` clause certainly improves performance on the GPU, but might make performance worse on the CPU. As with OpenMP 4.0, a data region was created at the highest possible scope, ensuring that data was kept on the device for an entire step of the solver, reducing the amount of data transfer between host and device. Once we had determined the best approach for parallelising an individual loop, the port took little time to implement, presenting similar complexity to the OpenMP 4.0 port. In future work we want to use the PGI 15.10 compilers to test how the OpenACC model translates onto CPUs, and discover what level of performance portability is achievable.

### 3.3 Kokkos

To port TeaLeaf to use Kokkos, every data-affecting function was wrapped into a functor, which included a template declaration, constructor, overloaded function call operator, and set of local variables. Generally, the simple reductions in TeaLeaf could be handled by the default Kokkos implementation, which initialises the reduction variable to zero and aggregates the value from all sources.
In the one TeaLeaf kernel where a multi-variable reduction was required, it was necessary to write custom initialisation and join functions, which further extended the code size of the functor. Also, all communication between the host and device had to be handled with the Kokkos abstract copy functions, necessarily exposing some memory management complexity. Because each functor in Kokkos flattens the iteration space and provides a single index parameter, it was necessary to reconstruct each cell's spatial location to exclude the halo regions from the computation. Our original implementation ignored the halo cells using a conditional statement within the functor body, but it transpired that Intel MIC native compilation did not optimally handle the local conditions. Collaborators from Sandia National Laboratories proposed an alternative approach using hierarchical parallelism.

<table>
<thead>
<tr>
<th>Model</th>
<th>CPUs</th>
<th>NVIDIA GPUs</th>
<th>KNC</th>
</tr>
</thead>
<tbody>
<tr>
<td>OpenMP 3.0</td>
<td>Yes</td>
<td></td>
<td>Native</td>
</tr>
<tr>
<td>OpenCL</td>
<td>Yes</td>
<td>Yes</td>
<td>Offload</td>
</tr>
<tr>
<td>OpenMP 4.0</td>
<td>Yes</td>
<td>Experimental</td>
<td>Offload</td>
</tr>
<tr>
<td>Kokkos</td>
<td>Yes</td>
<td>Yes</td>
<td>Native</td>
</tr>
<tr>
<td>RAJA</td>
<td>Yes</td>
<td></td>
<td>Native</td>
</tr>
<tr>
<td>OpenACC</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
</tr>
</tbody>
</table>

Table 1. Supported implementations for each model.

```cpp
using namespace Kokkos;

// Call from host
parallel_for(TeamPolicy<Device>(dims.x - 2*halo, halo), kernel);

// Inside functor
KOKKOS_INLINE_FUNCTION
void operator()(const member_type& team) const {
    int team_offset = (team.league_rank() + halo) * dims.y;
    parallel_for(TeamThreadRange(team, halo, dims.y - halo), [&] (const int j) {
        int index = team_offset + j;
        p[index] = beta * p[index] + r[index];
    });
}
```

Figure 7. Kokkos kernel implementing hierarchical parallelism for re-encoding loop-level halo exclusions.
This solution was incorporated into each performance-critical functor, introducing layers of parallelism throughout the code in the form of nested lambda functions. Figure 7 demonstrates two-dimensional hierarchical parallelism added to the functor in Figure 4; if three-dimensional exclusions were needed, an additional nested lambda statement would be required. When performing a reduction, additional code is needed to critically add the results from each team. This additional control over the parallelism allows the halo exclusion to be encoded back into the iteration space, which is more abstract and better expressed than a loop-body conditional, but does significantly increase the complexity of each call. We believe that a key requirement of models like Kokkos is that they reduce the barrier to entry for scientific application developers wanting to target heterogeneous platforms. They must limit the distance between the development effort required of a basic OpenMP F90/C application and a portable solution written with their APIs. Kokkos can now be written using lambda expressions instead of the templated functor syntax, making the code significantly more succinct; however, the lack of support in CUDA 7.0 meant this improvement could not be evaluated in our research. When using the lambda style, Kokkos presents a convenient and expressive style that abstracts platform-specific complexities, making it a powerful model for new applications using C++11.

### 3.4 RAJA

As RAJA is currently still in pre-release development, our implementation may not be representative of the style that will be required to target GPUs and accelerators. However, for targeting the CPU, the port required little knowledge beyond C++ lambda functions, and involved a similar development effort to OpenMP 3.0. Porting to RAJA required changing all of the main loops to be lambda calls, and creating IndexSets to handle the data traversal.
Because RAJA wraps each function's iteration space into an indirection array, it was possible to exclude the halo boundaries without any explicit conditions or index calculations in the loop body. While this did make each of the lambda calls succinct, the pre-computation of those indirection lists still had to occur earlier in the application. Given many repetitive access patterns throughout an application, this would likely lead to a reduction of code when compared to OpenMP, but for cases where data traversal is fairly diverse between functions, this may lead to a bloat of decoupled code that generates the indirection lists. The TeaLeaf application does not require particularly complicated data access patterns, and so our evaluation cannot give much insight into the full power of this feature. We do, however, believe that, when introducing RAJA into large codebases, careful design will be required to outline exactly where the indirection array initialisation belongs. We did find that it was necessary to create our own implementations of the dispatch functions, to handle situations where we had multiple reduction variables, and for multiple indexing. This flexibility was very useful, but could potentially inhibit long-term portability, as the custom implementations diverge from the core RAJA implementation over time. Our impression of RAJA is that, although it is not yet released, its philosophy is sound, and its use of C++11 features made porting the application very straightforward. Should RAJA maintain this usability once functional portability is improved, we expect it to represent a desirable approach to developing new parallel applications using C++11.

### 3.5 CUDA

CUDA is well discussed in other research [3, 11, 20], but overall we found that it exposed greater complexity than all of the ports except for OpenCL. In order to port TeaLeaf to CUDA we essentially converted all of the loops into CUDA kernels, and wrote data copying and reduction logic.
While this was close in development effort to Kokkos, CUDA was more complex, primarily because it was necessary to create a custom GPU-specific reduction, including reduction code inside all of the individual reduction-based kernels. Assuming a 1D grid of 1D blocks of threads, it is also necessary to calculate a block size and corresponding number of blocks, as well as checking for iteration overspill from within the kernels. Importantly, CUDA offers no portability beyond NVIDIA GPUs, and offers several features that can increase this complexity further in exchange for potentially improved performance.

### 3.6 OpenCL

Our immediate impression of OpenCL is that it exposed more complexity than the other models, and also required more boilerplate code to handle the abstract model. However, this abstraction allows the framework to support many different architectures, and offers the greatest functional portability of any of the models presented in this paper. It also means that there is a lot of scope for tuning for a particular device, should it be necessary. Once the boilerplate code is complete, the porting experience is not much more complicated than CUDA, requiring some additional abstract code when kernels and buffers are created. One important complication with OpenCL is reductions: as with CUDA they have to be written manually but, unlike CUDA, they potentially have to target multiple different devices. In our case, targeting the CPU, GPU and KNC, we would ideally create device-specific reductions for each of them that take advantage of the device characteristics, but this puts the responsibility on the developer and inhibits long-term portability. OpenCL 2.0 includes built-in work-group reductions that can be implemented by particular vendors, and may offer an important improvement for performance portability. To reduce the complexity of OpenCL, there are C++ and Python wrappers, which allow some improvement in the host-code syntax.
### 4. Mesh Convergence Performance Analysis

Each of the ports was tested individually using the same problem parameters for the three solvers: CG, PPCG and Chebyshev.

<table>
<thead>
<tr>
<th>Device</th>
<th>Peak BW</th>
<th>STREAM BW</th>
</tr>
</thead>
<tbody>
<tr>
<td>Xeon E5-2670 CPU x 2</td>
<td>102.4 GB/s</td>
<td>76.2 GB/s</td>
</tr>
<tr>
<td>NVIDIA K20X GPU</td>
<td>250.0 GB/s</td>
<td>180.1 GB/s</td>
</tr>
<tr>
<td>Xeon Phi 5110P KNC</td>
<td>320.0 GB/s</td>
<td>159.9 GB/s</td>
</tr>
</tbody>
</table>

Table 2. Devices and corresponding memory bandwidth (BW).

Testing was performed on the Blue Crystal supercomputer at the University of Bristol, and the Swan XC40 supercomputer provided by Cray Inc., using the modern HPC devices listed in Table 2. All of the results presented are for a mesh size of 4096x4096, which represents the point of mesh convergence for the problem, where a larger mesh size would provide no additional scientific information.

#### 4.1 CPU

The CPU results were collected on dual socket Intel Xeon E5-2670 8-core Sandy Bridge processors, with 16 threads and thread affinity set to `compact`. All of the models except for CUDA support parallel execution on CPU architectures.

![Figure 8. Results for dual socket Intel Xeon E5-2670 CPUs solving across a 4096x4096 mesh (lower is better).](image)

The pure OpenMP implementations are the fastest options, with the C++ implementation performing worst on the Chebyshev solver, experiencing a 15% increased runtime compared with the Fortran 90 version. Our research found that this performance difference occurs for identical TeaLeaf code, depending on whether it was compiled as C or C++ with the Intel compilers (15.0.3). Kokkos demonstrates excellent performance across all of the solvers, with at most a 10% penalty compared to the C++ implementation. This is a strong indication of the model's potential, and of its ability to output well-configured and optimised code.
The RAJA port exhibits a roughly 20% penalty for the CG and PPCG solvers, but the Chebyshev solver consistently requires an additional 40% solve time. We hypothesised that, as the use of indirection lists in RAJA precludes vectorisation, this performance could be indicative of the role that vectorisation plays in good performance for the Chebyshev solver. By creating proof-of-concept RAJA loop implementations that utilised the OpenMP 4.0 `simd` statement (RAJA SIMD), we were able to improve this performance by around 20% for the Chebyshev solver, bringing it in line with the other solvers. The OpenCL CPU implementation suffered from very high variance, with a minimum runtime of 1631s and a maximum of 2813s across 15 tests. While we do not know exactly why this variation was occurring, we have observed that the Intel OpenCL implementation uniquely does not use OpenMP to handle the CPU parallelism, instead using Intel Thread Building Blocks (TBB). Intel TBB operates a non-deterministic work-stealing scheduler [10], and we have considered that the variability may have been affected by this functionality. If developer-directed thread affinity control were possible, we expect this variance could be limited. Lee et al. [12] take this point even further and suggest that affinity control would likely allow enhanced performance tuning of OpenCL in general. Overall, the performance was quite consistent across the models on the CPU and, excepting some minor performance issues, at most a 20% performance penalty is likely to be observed by choosing any of the performance portable options.

#### 4.2 GPU

The results were collected on an NVIDIA Tesla K20X using CUDA 7.0, hosted on Cray Inc.'s XC40 Swan supercomputer. It is important to reiterate here that OpenCL is the only option that can also target AMD GPUs, which gives it an advantage over the other models in terms of functional portability.
Also, Kokkos uses template meta-programming to rewrite the application code into CUDA, while OpenCL and OpenACC directly output PTX code.

![Figure 9. Results for GPU implementations on an NVIDIA K20X solving across a 4096x4096 mesh (lower is better).](image)

The performance results show that both CUDA and OpenCL perform almost identically, and achieve better results than the other models. This is a very good result that shows that OpenCL is able to perform exceptionally well on the GPU, matching the non-portable and device-optimised CUDA implementation. OpenACC achieved acceptable results for all of the solvers, with a roughly 30% penalty for CG and 10% for the other two solvers, but it must be recognised that the port was the easiest to develop for the GPU. The Kokkos implementation also exhibits very good performance for the Chebyshev and PPCG solvers, suffering less than a 5% performance penalty compared to the CUDA implementation. Unfortunately, the CG solver demonstrates an unexplained performance problem, requiring roughly 50% additional solve time compared with OpenCL and CUDA. In order to investigate this issue we also tested on an NVIDIA K20c with CUDA 6.5, but saw identical problems with the CG solver. Given the results for the other solvers, we expect that this is a performance issue that could be fixed or improved given further investigation. Some collaboration with Sandia did find that a solution using hierarchical parallelism (Kokkos HP) was able to improve the performance by around 10% for the CG solver. Unfortunately, this was to the detriment of the PPCG and Chebyshev solvers, which experienced a more than 20% overhead following the change. In spite of the problem with the CG solver, the results for Kokkos are impressive and demonstrate a good level of performance portability between Intel CPU and NVIDIA GPU architectures.
#### 4.3 Intel Xeon Phi Knights Corner

The results were collected using 60 cores with 4 hardware threads (240 threads total) and thread affinity set to `compact`. Our overall impression is that the KNC architecture is challenging to achieve reasonable and consistent performance on, and we found that significant differences in performance profile can be seen between different versions of the device.

![Figure 10. The results on a 61-core Intel Xeon Phi Knights Corner SE10P on a 4096x4096 mesh (lower is better).](image)

The performance results clearly show that performance on the KNC device was far more varied between the programming models, which is indicative of the challenge it posed. The natively compiled OpenMP Fortran 90 implementation of TeaLeaf represents the best possible performance achievable for all solvers, maintaining fairly consistent runtimes between the three solvers. The OpenMP 4.0 port required 45% additional runtime for the CG solver compared to the Fortran 90 implementation, but achieved performance to within 10% for both the Chebyshev and PPCG solvers. As previously discussed, we were able to improve upon our portable OpenMP 4.0 implementation by reducing the number of target regions, achieving identical performance to the Fortran OpenMP native port, but making the application non-portable. Our OpenCL implementation suffered from unusual behaviour, achieving acceptable performance for the Chebyshev and PPCG solvers, but poor performance for the CG solver, at nearly 3x worse than the best port. We did observe that running this identical code on a different version of KNC resulted in the CG solver runtime reducing to roughly 1100s, with the other solvers maintaining the same performance. This would appear to suggest that there is a performance problem being caused by an issue with the architecture or software, as opposed to improper implementation.
Although RAJA does not come with any automatic support for native compilation, it was straightforward to use the `-mmic` switch to natively compile the RAJA port. The results above show that this did not lead to good performance compared to the Fortran 90 OpenMP implementation, with substantially higher runtimes required for all solvers. We know that vectorisation is very important for performance on the KNC, and plan to test this with our proof-of-concept SIMD implementation in the future. As previously discussed, the Kokkos hierarchical parallelism variant (Kokkos HP) was developed by Sandia National Laboratories to overcome a performance issue with halo exclusion conditions being present in the loop body. This solution originally came about because the conditions in the body of each loop are handled particularly inefficiently when being natively compiled. The hierarchical parallelism solution re-encodes this information so that no check is required, roughly halving the solve time for the CG and PPCG solvers on the KNC. Overall, the results show that each model is able to achieve acceptable results for at least one solver with some tuning. We believe that this is enough to suggest that performance portability could be possible given more maturity and focus on the KNC architecture.

### 5. Even-Step Mesh Increment Analysis

The previous results focus upon the mesh convergence limit because it represents the point at which the programming models are subjected to the most intensive yet realistic data load. It is also interesting to observe the behaviour that occurs at lower problem sizes, as it reveals interesting details not seen at the convergence limit. There are many distinct features of the plot in Figure 11; in particular, all of the models present different runtime growth patterns. The problem sizes are shown up to 1225x1225, or $15 \times 10^5$ cells, an order of magnitude smaller than the number of cells at a mesh size of 4096x4096.
Several of the programming models appear to have a very fast runtime growth rate, in particular OpenMP 4.0, OpenACC, Kokkos KNC and OpenCL KNC. This growth in runtime eventually slows and the models become more consistent with the others as the mesh convergence limit is reached. Each of those models has a high intercept on the plot, and we expect that this runtime growth is indicative of large overheads in each of the models that are hidden as the amount of computation and data processing is increased.

![Figure 11. A plot of the runtime as problem size is increased in even steps for all models (lower is better).](image)

Another notable feature is that the OpenMP Fortran 90 implementation achieves the best performance up to $9 \times 10^5$ cells, but then the CPU models experience a gradual decrease in performance. This change point appears to indicate when the CPU caches have become saturated and data needs to be stored in DRAM, over time creating a memory latency and bandwidth bottleneck. It can also be seen that the GPU-targeting implementations continue to benefit from linear runtime growth, which demonstrates that they are effectively utilising the device's data processing capabilities.
For the KNC, most of the models suffer from large overheads at these mesh sizes, but the OpenMP Fortran 90 implementation demonstrates fairly linear runtime growth similar to the GPU-targeting models.

### 6. Bandwidth Analysis

As TeaLeaf is a memory bandwidth bound application, observing the peak bandwidth achieved on each device presents an important measure of the success of the models at taking advantage of the targeted resources. We present the average bandwidth achieved across all solvers relative to the bandwidth achieved by the STREAM benchmark on each of the target devices.

![Figure 12. The percentage of STREAM bandwidth achieved by each model averaged over all solvers (higher is better).](image)

The results unequivocally show that the device-optimised implementations, OpenMP 3.0 and CUDA, achieve the best overall memory bandwidth utilisation. Aside from this, we see that most of the performance portable options fall within a 20% bandwidth reduction from this point, with several of the CPU and GPU alternatives experiencing at most a 10% memory bandwidth penalty. The Kokkos implementation performs to within 10% of the best achieved memory bandwidth for both the CPU and GPU, which is a very impressive result and clearly advocates the potential of the model. The results on the KNC are poor, but the improvement seen with the hierarchical parallelism update shows that better performance may be possible given some device-specific tuning or implementation maturity. The hierarchical parallelism implementation of Kokkos improved performance on the KNC and maintained CPU performance, but the performance reduction of the Chebyshev and PPCG solvers on the GPU means there are some trade-offs. It would be possible to achieve better average performance by combining both solutions with some conditionality regarding the target device and solver.
However, this starts to put the performance portability responsibility back in the hands of the application developer, and makes the resulting code much more complicated than maintaining a single solution throughout.

### 7. Related Work

Herdman et al. [5] performed a similar analysis to the one presented in this research, evaluating OpenACC, OpenCL and CUDA using a sister mini-app of TeaLeaf, CloverLeaf. On the topic of directive-based programming models targeting accelerators, Lee et al. [13] evaluated a range of models with 13 distinct benchmarks. In many cases the performance matched their hand-tuned CUDA implementations, but certain optimisations were difficult or even impossible to express with the high-level directive models. Using a pattern-based comparison, Wienke et al. [26] compared both OpenMP 4.0 and OpenACC, finding that OpenACC exposes more features overall, but that OpenMP 4.0 will likely achieve better long-term adoption because of its success on the CPU. Teodoro et al. [24] evaluated the performance profiles of a KNC, GPU, and CPU with respect to microscopy image analysis, concluding that the devices showed significant variance between operations, with each device exposing a preference for particular operation types.

### 8. Future Work

TeaLeaf has a specific performance profile, and it would be very useful to consider the success of each model relative to applications that have different requirements, such as CloverLeaf and the SN Application Proxy (SNAP). As described throughout, many of the programming models evaluated are awaiting compiler support for particular platforms. Performance portability could be assessed on additional target hardware not investigated in this paper, or where there are novel architectural differences, such as the Intel Xeon Phi Knights Landing with its high bandwidth memory.
Finally, we believe that it would be useful to investigate each model's ability to handle more complicated requirements, such as heterogeneous compute or adjusting data layouts per device.

### 9. Conclusions

We have used the mini-app TeaLeaf to test a range of parallel programming models, exposing their functional portability and performance portability on three distinct modern HPC devices. Our research has shown that, among the increasing selection of parallel programming models, there is a varied range of techniques being used to exploit increasing node-level parallelism, each presenting individual levels of complexity and support for scientific application development. Given that the performance has been reasonable for all of the programming models, we expect that the future of such models will depend on their ability to improve the ease with which applications are developed. Beyond the functional portability offered, the level of complexity that a model exposes is likely to become the deciding factor as to whether a model is useful to a particular application developer. Kokkos and RAJA have been shown to be promising options for performance portability that are growing in usefulness as they mature. However, these models will still require up-front investment to migrate existing C and especially Fortran codes, and may need to expose additional complexity to achieve good performance across multiple devices. Although a port written with Kokkos today would likely have to use functors, which are quite verbose, the more mature lambda implementations possible with both Kokkos and RAJA are definitely competitive with the directive-based models for ease of development. In spite of the variability issues on the CPU, OpenCL is one of the most performance portable options. Its extensive support on a range of devices not targeted by the other models also sets OpenCL apart.
The directive-based programming models represent the best case in terms of development ease, and the fact that they are language-agnostic between C and Fortran is also a significant benefit. If implementations of OpenMP 4.0 become available that can match OpenACC's performance, we predict it will be well adopted by scientific application developers as a usable interface for future-proof performance.

Acknowledgments

This work was funded by an EPSRC CASE studentship supported by the UK Atomic Weapons Establishment. The authors would like to extend our gratitude to David Beckingsale at Lawrence Livermore National Laboratory for his support with the RAJA port, and Christian Trott from Sandia National Laboratory for his support with the Kokkos port. We would also like to thank the University of Bristol Intel Parallel Computing Center (IPCC), High Performance Computing Group, and Advanced Computing Research Centre, Intel for the provision of an Intel Xeon Phi, and Cray Inc. for allowing us to test on their Swan XC40 supercomputer.
Computation Reuse in Analytics Job Service at Microsoft Alekh Jindal, Shi Qiao, Hiren Patel, Zhicheng Yin, Jieming Di, Malay Bag, Marc Friedman, Yifung Lin, Konstantinos Karanasos, Sriram Rao Microsoft {aljindal,shiqiao,hirenlijkmyin,jidi,malayb,marcfriedman,yifungl,kokaranas,srirama} @microsoft.com ABSTRACT Analytics-as-a-service, or analytics job service, is emerging as a new paradigm for data analytics, be it in a cloud environment or within enterprises. In this setting, users are not required to manage or tune their hardware and software infrastructure, and they pay only for the processing resources consumed per job. However, the shared nature of these job services across several users and teams leads to significant overlaps in partial computations, i.e., parts of the processing are duplicated across multiple jobs, thus generating redundant costs. In this paper, we describe a computation reuse framework, coined CLOUDVIEWS, which we built to address the computation overlap problem in Microsoft’s SCOPE job service. We present a detailed analysis from our production workloads to motivate the computation overlap problem and the possible gains from computation reuse. The key aspects of our system are the following: (i) we reuse computations by creating materialized views over recurring workloads, i.e., periodically executing jobs that have the same script templates but process new data each time, (ii) we select the views to materialize using a feedback loop that reconciles the compile-time and run-time statistics and gathers precise measures of the utility and cost of each overlapping computation, and (iii) we create materialized views in an online fashion, without requiring an offline phase to materialize the overlapping computations. 
CCS CONCEPTS • Information systems → Query optimization; • Computer systems organization → Cloud computing; KEYWORDS Materialized Views; Computation Reuse; Shared Clouds ACM Reference Format: Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGMOD ’18, June 10–15, 2018, Houston, TX, USA © 2018 Association for Computing Machinery. ACM ISBN 978-1-4503-4703-7/18/06 . . $15.00 https://doi.org/10.1145/3183713.3190656 1 INTRODUCTION 1.1 Background There is a recent trend of offering analytics-as-a-service, also referred to simply as job service, by major cloud providers. Examples include Google’s BigQuery [15], Amazon’s Athena [3], and Microsoft’s Azure Data Lake [5]. Similar job services are employed for the internal needs of large enterprises [11, 49]. These services are motivated by the fact that setting up and running data analytics is a major hurdle for enterprises. Although platform as a service (PaaS), software as a service (SaaS), and more recently database as a service (DBaaS) [4, 6] have eased the pain of provisioning and scaling hardware and software infrastructures, users are still responsible for managing and tuning their servers. A job service mitigates this pain by offering server-less analytics capability that does not require users to provision and manage servers. Instead, the service provider takes care of managing and tuning a query engine that can scale instantly and on demand. 
Users can get started quickly using the familiar SQL interface and pay only for the processing used for each query, in contrast to paying for the entire provisioned server infrastructure irrespective of the compute resources actually used.

1.2 Problem

Given the above shift from provisioned resources to actually consumed resources, enterprises naturally do not want to duplicate their resource consumption and pay redundant costs. However, this is a major challenge in modern enterprise data analytics, which consists of complex data pipelines written by several users, where parts of the computations end up running over and over again. Such computation overlap not only adds to the cost, but it is also really hard for the developers or even the administrators to detect these overlaps across different scripts and different users. To illustrate the problem, consider SCOPE [11, 52], which is the equivalent of Azure Data Lake for internal data analytics at Microsoft. SCOPE is deployed over hundreds of thousands of machines, running hundreds of thousands of production analytic jobs per day that are written by thousands of developers, processing several exabytes of data per day, and involving several hundred petabytes of I/O. Almost 40% of the daily SCOPE jobs have computation overlap with one or more other jobs. Likewise, there are millions of overlapping subgraphs that appear at least twice. These overlaps are incurred by 70% of the total user entities (humans and machines) on these clusters. Figure 1 shows the cluster-wise computation overlap in five of our clusters. We can see that all clusters, except cluster 3, have more than 45% of their jobs overlapping. Likewise, more than 65% of users on all clusters end up having computation overlap in their jobs, and the percentage of subgraphs appearing at least twice could be as high as 80%.
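To make the kind of statistics above concrete, here is a toy Python sketch of how overlap percentages could be computed once each job's subgraphs have been reduced to hashes. The data layout, names, and numbers are illustrative assumptions, not SCOPE internals:

```python
from collections import Counter

def overlap_stats(jobs):
    """jobs: job id -> set of subgraph hashes appearing in that job."""
    counts = Counter(h for subs in jobs.values() for h in subs)
    repeated = {h for h, c in counts.items() if c >= 2}
    jobs_with_overlap = sum(1 for subs in jobs.values() if subs & repeated)
    return {
        "pct_jobs_overlapping": 100.0 * jobs_with_overlap / len(jobs),
        "pct_subgraphs_repeated": 100.0 * len(repeated) / len(counts),
    }

# job1 and job2 share a scan and a join; job3 overlaps with nothing.
stats = overlap_stats({
    "job1": {"scan_A", "join_AB", "agg1"},
    "job2": {"scan_A", "join_AB", "agg2"},
    "job3": {"scan_C"},
})
print(stats)
```

In this toy workload, two of three jobs overlap and two of five distinct subgraphs repeat, mirroring the per-job and per-subgraph percentages reported above.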
While the ideal solution would be for the users to modularize their code and reuse the shared set of scripts and intermediate data, this is not possible in practice as users are distributed across teams, job functions, as well as geographic locations. Thus, we need an automatic cloud-scale approach to computation reuse in a job service. 1.3 Challenges There is a rich literature for materializing views [19, 20, 22, 30, 33, 44, 46, 53] and for reusing intermediate output [10, 12, 18, 23, 36–38, 50]. However, there are a number of new challenges in building a computation reuse framework for the SCOPE job service. First, enterprise data analytics often consists of recurring jobs over changing data. The SCOPE job service has more than 60% of the jobs in its key clusters as recurring [25]. With recurring jobs, scheduling and carefully materializing views over the new data is crucial, which was not an issue in traditional view selection. Incremental maintenance would not work because data might be completely new. SCOPE jobs are further packed in tight data pipelines, i.e., multiple jobs operate in a given time interval with strict completion deadlines. Tight data pipelines leave little room to analyze the recurring workload over the new data in each occurrence. Second, we need a feedback loop to analyze the previously executed workload and detect overlapping computations. Given the large volume of overlaps, materializing all of them for reuse is simply not possible. Typical methods to select the interesting overlaps (or views) depend on the utility and cost of each overlap, i.e., the runtime savings and the storage cost of each overlap. Unfortunately, however, the optimizer estimates for utility and costs are often way off due to a variety of factors (unstructured data, inaccurate operator selectivities, presence of user code, etc.) [17, 29, 31]. 
Thus, the feedback loop needs to reconcile the logical query trees with the actual runtime statistics to get more precise measures of the utility and cost of each overlap. Third, a job service is always online and there is no offline phase available to create the materialized views, as is expected with traditional materialized views. Halting or delaying recurring jobs to create materialized views is not an option, as it carries the risk of not meeting the completion deadlines and affecting downstream data dependencies. Thus, we need to create materialized views just in time and with minimal overheads. This is further challenging because multiple jobs can now compete to build views (build-build interaction), and they depend on each other for the availability of views (build-consume interaction). Finally, we need an end-to-end system for computation reuse that satisfies a number of requirements inspired by our production environments, including automatic reuse and transparency to the end users.

1.4 Contributions

In this paper, we describe why and how we built an end-to-end system for automatically detecting and reusing overlapping computations in the SCOPE job service at Microsoft. Our goal is to allow users to write their jobs just as before, i.e., with zero changes to user scripts, and to automatically detect and reuse computations wherever possible. We focus on exact job subgraph matches, given that exact matches are plentiful and this makes the problem much simpler without getting into view containment complexities. Although we present our ideas and findings in the context of the SCOPE job service, we believe that they are equally applicable to other job services. Our core contributions are as follows. First, we present a detailed analysis of the computation reuse opportunity in our production clusters to get a sense of the magnitude of the problem and the expected gains.
Our analysis reveals that computation overlap is a major problem across almost all business units at Microsoft, with significant runtime improvements to be expected with relatively low storage costs. We also note that the overlaps often occur at shuffle boundaries, thereby suggesting that the physical design of the materialized view is important (Section 2). Then, we discuss enabling computation reuse over recurring jobs. The key idea is to use a combination of normalized and precise hashes (called signatures) for computation subgraphs. The normalized signature matches computations across recurring instances, while the precise signature matches computations within a recurring instance. Together these two signatures enable us to analyze our workload once and reuse overlapping computations over and over again (Section 3). We provide an overview of our CLOUDVIEWS system, an end-to-end system for computation reuse in a job service, along with our key requirements and the intuition behind our approach. The CLOUDVIEWS system consists of an offline CLOUDVIEWS analyzer and an online CLOUDVIEWS runtime. To the best of our knowledge, this is the first work to present an industrial strength computation reuse framework for big data analytics (Section 4). We describe the CLOUDVIEWS analyzer for establishing a feedback loop to select the most interesting subgraphs to materialize and reuse. The CLOUDVIEWS analyzer captures the set of interesting computations to reuse based on their prior runs, plugs in custom view selection methods to select the view to materialize given a set of constraints, picks the physical design for the materialized views, and also determines the expiry of each of the materialized views. We further describe the admin interface to trigger the CLOUDVIEWS analyzer (Section 5). We describe the CLOUDVIEWS runtime which handles our online setting for computation reuse. 
Key components of the runtime include a metadata service for fetching the metadata of computations relevant for reuse in a given job, an online view materialization mechanism as part of the job execution, a synchronization mechanism to avoid materializing the same view in parallel, making materialized views available early during runtime, automatic query rewriting using materialized views, and job coordination hints to maximize the computation reuse (Section 6). Thereafter, we present an experimental evaluation of CLOUDVIEWS. We present the impact over production workloads at Microsoft, both in terms of latency and CPU hours. Our results show an average and overall latency improvement of 43% and 60% respectively, as well as an average and overall CPU hour improvement of 36% and 54% respectively. We further show evaluation over the TPC-DS benchmark. Our results show 79 out of the 99 TPC-DS queries having improvements with CLOUDVIEWS, with an overall runtime improvement of 17%. We also discuss the various overheads of CLOUDVIEWS, including the cost of workload analysis, the metadata lookup, and the impact on compiler runtime (Section 7). Finally, we discuss the lessons learned from the CLOUDVIEWS project (Section 8).

Figure 2: Overlap in one of the largest SCOPE clusters

We further analyze one of the largest business units, in terms of the number of jobs, in the above cluster. Figure 3 shows the overlapping computations from all VCs in this business unit. Note that the business unit is a meaningful granularity because VCs within a business unit compose a data pipeline, with some VCs cooking the data (producers) and some VCs processing the downstream data (consumers). Figures 3(a)–3(d) show the cumulative distributions of per-job, per-input, per-user, and per-VC overlaps. Surprisingly, we see that most of the jobs have 10s to 100s of subgraphs that overlap with one or more other jobs. This suggests that there are significant opportunities to improve the data pipelines in order to reduce the redundancy. Apart from reusing computations, one could also consider sharing computations in the first place. We make a similar observation from the per-input overlap distribution, where we see that more than 90% of the inputs are consumed in the same subgraphs at least twice, 40% are consumed at least five times, and 25% are consumed at least ten times. In terms of users, we again see 10s to 100s of overlaps per user, with the top 10% having more than 1500 overlaps. These heavy hitters could be consulted separately. Lastly, for VCs, we see at least three groups having a similar number of overlaps. Overall, computation overlap is widespread across jobs, inputs, users, and VCs, and it needs to be addressed in a systematic manner.

2.3 Operator-wise Overlap

We now analyze the operator-wise overlap, i.e., the root operator of the overlapping computation subgraph. Figure 4(a) shows the operator distribution for the overlaps shown in Figure 3. We can see that sort and exchange (shuffle) constitute the top two most overlapping computations. This is interesting because these two are typically the most expensive operations as well, and so it would make sense to reuse them. In contrast, the next three most overlapping operators, namely Range (scan), ComputeScalar, and RestrRemap (usually column remapping), are expected to be much cheaper to re-evaluate, since they are closer to the leaf level in a query tree. Among other operators of interest, we see group-by aggregate, joins, and user-defined functions (including process, reduce, and even extractors) having significant overlaps. Figures 4(b)–4(d) show the cumulative overlap distribution for three of the operators, namely shuffle, filter, and user-defined processor. Even though we show the shuffle operator to be more overlapping, only a small fraction of the shuffles have high frequency. This changes for the filter operator, where the cumulative distribution grows more flat, meaning that a greater number of filters have higher frequency. Finally, for user-defined operators in Figure 4(d), the curve is even flatter. This is because user-defined operators are likely to be shared as libraries by several users and teams.

2.4 Impact of Overlap

In the previous sections, we saw the computation overlaps in different clusters, VCs, and operators. We study the impact of these overlaps along several dimensions. Figures 5(a)–5(d) show the cumulative distributions of frequency, runtime, output size, and relative costs (i.e., view-to-query cost ratio) of the overlapping computations in one of our largest business units (same as from Section 2.2). In terms of frequency, there are close to a million computations appearing at least twice, with tens of thousands appearing at least 10 times, a few hundred appearing at least 100 times, and some appearing at least 1000 times — all in one single day! The average overlap frequency, however, is 4.2 (median 2, 75th percentile 3, 95th percentile 14, and 99th percentile 36). Thus, computation overlap frequencies are heavily skewed and we need to be careful in picking the views to materialize. In contrast to the frequency, the runtime and the output size distributions have much less skew. Interestingly, 26% of the overlaps have a runtime of 1s or less, indicating there are opportunities to prune many of the reuse candidates, while 99% of the overlaps have runtimes below 1000s. In terms of output size, 35% of the overlaps have size below 0.1MB, which is good (in case they are useful) for storage space, and 99% have size below 1TB. Lastly, the view-to-query cost ratio is an interesting metric to understand the relative importance of a view to a query. We note that 46% of the overlapping computations have a view-to-query cost ratio of 0.01 (1%) or less.
These overlaps will not result in significant savings in latency, although their cumulative resource consumption savings may still be of interest to the customer. Overall, this is again a highly skewed distribution, with only 23% of the overlaps having a view-to-query cost ratio of more than 0.1, and just 4% having a ratio of more than 0.5.

3 REUSE OVER RECURRING WORKLOADS

Our goal is to materialize overlapping computations over recurring jobs in SCOPE, i.e., jobs that appear repeatedly (hourly, daily, weekly, or monthly), have template changes in each instance, and operate over new data each time. Prior works require the workload to be known a priori in order to analyze the workload and select the views to materialize. However, with recurring jobs changing and running over new data in each instance, the exact workload is not available until the next recurring instance, e.g., the next hour. Running the workload analysis to select the views to materialize within the same recurring instance, before running the actual jobs, is simply not possible. To handle recurring jobs, we collect a combination of two signatures for each subgraph computation: one which identifies the computation precisely, and one which normalizes the precise signature by the recurring changes, e.g., date/time predicates, input names, etc. These signatures are created during compilation and they are similar to plan signatures or fingerprints in prior works [1]. However, we extended the precise signature to further include the input GUIDs, any user code, as well as any external libraries used for custom code. The normalized signature ensures that we capture a normalized computation that remains the same across different recurring instances. Figure 7 shows the use of these two signatures in our approach.
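As an illustration of the two-signature idea, here is a minimal Python sketch assuming a textual plan representation and simple regex-based normalization. The regexes, plan encoding, and function names are illustrative assumptions, not SCOPE's actual format:

```python
import hashlib
import re

def precise_signature(plan_text, input_guids):
    # Exact identity: the plan plus the GUIDs of the inputs it reads.
    payload = plan_text + "|" + ",".join(sorted(input_guids))
    return hashlib.sha256(payload.encode()).hexdigest()

def normalized_signature(plan_text):
    # Strip recurring details: date literals and dated input names.
    norm = re.sub(r"\d{4}-\d{2}-\d{2}", "<DATE>", plan_text)
    norm = re.sub(r"log_\d+", "log_<N>", norm)
    return hashlib.sha256(norm.encode()).hexdigest()

day1 = "SELECT * FROM log_20180610 WHERE date = '2018-06-10'"
day2 = "SELECT * FROM log_20180611 WHERE date = '2018-06-11'"

# Same computation across recurring instances: normalized signatures match...
assert normalized_signature(day1) == normalized_signature(day2)
# ...while the precise signatures (different data, different inputs) do not.
assert precise_signature(day1, ["guid-1"]) != precise_signature(day2, ["guid-2"])
```

The normalized hash identifies "the same" subgraph across days for materialization decisions, while the precise hash distinguishes each day's instance for safe reuse and expiry.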
We analyze any recurring instance from the workload history and select frequent computations (views) based on their precise signatures (Step 1), and collect their corresponding normalized signatures into our metadata service (Step 2). This analysis needs to be run periodically, only when there is a change in the workload, thereby removing the need to run workload analysis within each recurring instance. Later during runtime, we materialize subgraphs based on their normalized signatures (Step 3), but we also record the precise signature of each materialized view in the physical path of the materialized files (Step 4). The precise signatures are used to match future computations for reuse (Step 5), as well as for expiring a materialized view (Step 6). In summary, the normalized signature identifies subgraphs across recurring instances (for materialization), while the precise signature matches subgraphs within a recurring instance (for reuse). Together, they make computation reuse possible over recurring workloads.

4 CLOUDVIEWS OVERVIEW

In this section, we give a brief overview of the CLOUDVIEWS system. Our key goals, derived from our engagement with product teams, are as follows: (1) Automatic: We need minimal manual intervention, since it is really hard for developers to coordinate and reuse overlapping computations amongst themselves. Thus, overlapping computations should be detected, materialized, reused, and evicted automatically. (2) Transparent: With hundreds of thousands of jobs, it is simply not possible to make changes in user scripts or their libraries. (3) Correct: Computation reuse should not introduce incorrect results, i.e., data corruption. This is especially challenging due to the presence of parameters, user code, and external libraries. (4) Latency-sensitive: SCOPE users cannot afford to slow down their data pipelines and hence computation reuse should offer better or the same performance.
This requires accurate estimates on the cost/benefit of materialized views, and the optimizer should still be able to discard a view in case it turns out to be too expensive. (5) Maximize reuse: The obvious goal is to do the computation reuse wherever possible. This is hard because overlapping jobs may arrive concurrently, and so views materialized in one job may not end up being reused. (6) Debuggability: SCOPE has a rich debuggability experience and computation reuse should preserve that. Specifically, customers (and the operations team) should be able to replay the job, see which materialized views are created or used, trace the jobs which created any of the views, and even drill down into why a view was selected for materialization or reuse in the first place. (7) Reporting: Finally, we need to report the impact of computation reuse.

Traditional materialized view technologies typically have three components: an offline view selection component, an offline view building component, and an online view matching component. In our approach, we have two online components: a periodic workload analyzer to mine overlapping computations, and a runtime engine to materialize and reuse those computations. Figure 6 shows the high-level architecture of CLOUDVIEWS. The left side shows the periodic workload analyzer that is used to analyze the SCOPE workload repository. Admins can choose to include or exclude different VCs for analysis. The output of this analysis is a set of annotations telling future jobs the subgraph computations that must be materialized and reused. The right side of Figure 6 shows the runtime component of CLOUDVIEWS.
Here, each incoming job can be processed in one of three ways: (i) exactly the same as before, in case none of the job subgraphs are materialized or they are deemed too expensive by the optimizer, (ii) a modified job graph that reads from a materialized view (i.e., there is a matching subgraph annotation and it is materialized) and reading from the materialized view is considered more efficient by the optimizer than recomputing it, and (iii) a modified job graph that spools and materializes the output of a subgraph (i.e., there is a matching subgraph annotation but it is not materialized). While the analyzer part of CLOUDVIEWS could be triggered explicitly by the user or scheduled as another recurring job, the runtime part is triggered by providing a command line flag during job submission. The job scripts of the end users remain unchanged.

5 CLOUDVIEWS ANALYZER

In this section, we describe the analyzer component of CLOUDVIEWS. The key features of this component include: (i) providing a feedback loop for runtime statistics, (ii) picking the physical design for the selected views to materialize, (iii) determining the expiry of a materialized view, and (iv) providing a user interface to tune and visualize the workload analysis. We describe each of these below.

5.1 The Feedback Loop

Picking the right set of views to materialize is a hard problem. State-of-the-art approaches rely on what-if optimization to estimate the expected improvements if the view were to be materialized [2]. Unfortunately, the optimizer cost estimates are often way off due to the presence of complex DAGs and user code. The problem becomes even more severe in a distributed cloud setting, where virtual hardware and scheduling issues make it even harder to model the actual gains in terms of job latencies. As a result, the actual improvements from a materialized view may be much lower, while its actual materialization costs may be much higher than the estimated ones.
Thus, we need higher confidence on which views to materialize, and we do not want to materialize a view which later ends up not being used, thereby wasting customer money in a job service. This gets further challenging with dynamic resource allocation within a job graph as well as with opportunistic resource allocation in SCOPE [8]. We handle the above issues by providing a feedback loop that reconciles compile-time estimates with run-time statistics, as depicted in Figure 8. Our feedback mechanism goes beyond learning from the same query, as in LEO [45], and considers arbitrary fine-grained commonalities across multiple jobs. We do this by enumerating all possible subgraphs of all jobs seen within a time window in the past, e.g., a day or a week, and finding the common subgraphs across them. Though this is more restricted than considering generalized views (footnote 2), the subgraphs considered have actually been used in the past (and are likely to also be used in the future) and there are runtime statistics available from those previous runs (so we can cost them more accurately). In order to use the runtime statistics from the previous runs, we connect the job data flow (the one which actually gets executed on a cluster of machines) back to the job query graph (the tree representation of the input user query). We do this by linking the operators executed at every stage in the data flow to operators in the query graph. Then, for every query subgraph, we extract the corresponding runtime statistics from the data flow. These include latency (time taken to execute the subgraph), cardinality (number of output rows in the subgraph), data size (in bytes), and resource consumption (CPU, memory, parallelism, IO, etc.). In cases where several operators are pipelined in a data flow, we attribute runtime statistics such as resource consumption to individual operators, e.g., by sub-dividing the total resource consumption of all pipelined operators based on the exclusive runtime of each operator in the pipeline.
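The exclusive-runtime attribution just described can be sketched in a few lines of Python; the operator names, numbers, and dictionary layout are assumptions for illustration:

```python
def apportion(total_cost, exclusive_runtimes):
    """Split a stage's total resource consumption among its pipelined
    operators, proportionally to each operator's exclusive runtime."""
    total_time = sum(exclusive_runtimes.values())
    return {op: total_cost * t / total_time
            for op, t in exclusive_runtimes.items()}

# A stage consuming 1200 CPU-seconds, shared by three pipelined operators:
per_op = apportion(1200.0, {"Filter": 10.0, "ComputeScalar": 20.0, "Sort": 70.0})
print(per_op)  # Filter: 120.0, ComputeScalar: 240.0, Sort: 840.0
```

The per-operator shares can then be summed over each candidate subgraph to obtain its observed cost, which is what the feedback loop reconciles against the optimizer's estimates.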
Our feedback loop has several key benefits. First, there is an inevitable duplication of analysis in user scripts, due to common data preparation needed in multiple analyses or simply due to the fact that developers often start from someone else's script before adding their own logic. With the feedback loop in our job service, users do not have to worry about de-duplicating their scripts; the system takes care of doing it automatically at runtime. Second, the runtime statistics provide more predictable measures of view materialization costs and benefits, thereby giving customers a better idea of how much they will pay and how much they will save with this feature. Third, the feedback loop makes it more likely that the selected (and materialized) subgraphs will actually end up being used in future jobs, in contrast to picking materialized views based on cost estimates and later finding them not useful if the estimates turn out to be incorrect. Fourth, our feedback loop considers job subgraphs without considering merging two or more subgraphs, as in more general view selection. This ensures that materializing a view never requires additional computation (and hence additional money) beyond what would anyway be done by a job using that view. And finally, the runtime statistics observed from the subgraphs of one job get shared across all future queries having any of those subgraphs. In fact, for any new job that comes in, the system may already know the runtime statistics for many of its subgraphs. (Footnote 2: Queries Q1 and Q2 reading attributes (A, B) and (A, C), respectively, would generate a view (A, B, C) as a candidate, even though it is neither a subgraph of Q1 nor of Q2.)

5.3 Physical Design

Although storage is cheap, the storage space used by materialized views typically is not paid much attention, i.e., views and their materialized physical design are typically not selected at the same time.
However, we observed that materialized views with poor physical design end up not being used because the computation savings get overshadowed by any additional repartitioning or sorting that the system needs to do. This happens because, with massively large datasets and massively parallel processing in SCOPE, repartitioning and sorting are often the slowest steps in the job execution. CLOUDVIEWS, therefore, pays close attention to view physical design. To do so, we extract the output physical properties (partitioning type, partitioning columns, number of partitions, sort columns, sort direction) of each subgraph while enumerating them. The output physical properties are good hints for view physical design as they are expected by subsequent operators in the job graph. In case of no explicit physical properties at the subgraph root, we infer them from the children, i.e., we traverse down until we hit one or more physical properties. Depending on how an overlapping subgraph is used in different jobs, there may be multiple sets of physical properties for the same subgraph. The default strategy is to pick the most popular set. However, in case of no clear choice, we treat multiple physical designs (of the same view) as different views and feed them to the view selection routine.

### 5.4 Expiry and Purging

Although storage is cheap, the storage space used by materialized views still needs to be reclaimed periodically. A simple heuristic is to remove all views from the previous recurring instance. However, discussions with our customers revealed that the output of hourly jobs could also be used in weekly or monthly jobs. Therefore, removing views after each hour/day could be wasteful. A better option is to track the lineage of the inputs of the view, i.e., for each input of the view, check the longest duration for which it gets used by any of the recurring jobs. The maximum of all such durations gives a good estimate of the view expiry.
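The lineage-based expiry estimate can be sketched as follows (an illustrative sketch, assuming the recurrence period of each consuming job is known; the names and units are ours):

```python
def estimate_view_expiry(view_inputs, consumer_periods):
    """Estimate a view's expiry as the maximum, over all of its inputs, of
    the longest recurrence period (in hours) of any recurring job that
    consumes that input.

    view_inputs: list of input names feeding the view.
    consumer_periods: dict mapping input name -> list of recurrence periods
    (e.g., hourly=1, daily=24, weekly=168) of the jobs consuming it.
    """
    longest_per_input = [max(consumer_periods.get(i, [0]))
                         for i in view_inputs]
    return max(longest_per_input, default=0)

# One input is consumed by hourly and daily jobs, another by a weekly job:
# the view should be kept for a week before purging.
expiry_hours = estimate_view_expiry(["clicks", "users"],
                                    {"clicks": [1, 24], "users": [168]})
```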
Apart from using standard SCOPE scripts, this type of lineage tracking could also be facilitated using provenance tools such as Grok [43], Guider [35], or Goods [21]. The view expiry thus obtained is encoded into the physical files, and our Storage Manager takes care of purging a file once it expires. Cluster admins could also reclaim a given amount of storage space by running the same view selection routines as described in Section 5.2 but replacing the max objective function with a min, i.e., picking the views with minimum utility. In the worst case, the materialized view files can simply be erased from the cluster. Both of the above operations, however, require cleaning the views from the metadata service first before deleting any of the physical files (to ensure that jobs consuming any of those inputs do not fail).

### 5.5 User Interfaces

CLOUDVIEWS provides a few ways to interact with the workload analyzer. First, there is a command line interface to run the analyzer over user-specific clusters, VCs, and time ranges. Users can also provide their custom constraints, e.g., storage costs, latency, CPU hours, or frequency, to filter down the overlapping computations. Then, there is a Power BI [39] dashboard to look at various summaries from the computation overlap analysis, as well as to drill down into the top-100 most overlapping computations in more detail. Together, the goal is to help users understand the computation overlap in their workloads and to tailor computation reuse for their needs.

### 6 CLOUDVIEWS RUNTIME

In this section, we describe the various components that make computation reuse possible during query processing.
We collectively refer to them as the CLOUDVIEWS runtime, which consists of: (i) a metadata service to query the relevant overlaps in each incoming job, (ii) an online view materialization capability to materialize views as part of query processing, (iii) a synchronization mechanism to prevent concurrent jobs from materializing the same view, (iv) an early materialization technique to publish a materialized view even before the job producing it completes, (v) automatic query rewriting to use materialized views wherever possible, and (vi) hints to the job scheduler in order to maximize computation reuse.

### 6.1 Metadata Service

The goal of the metadata service is to provide the lookup for overlapping computations and to coordinate the materialization and reuse of those computations. Recall that we have an online setting, i.e., data batches and jobs arrive continuously, and hence view materialization and reuse is a dynamic activity. Therefore, instead of simply looking up the views in the compiler, multiple SCOPE components interact with the metadata service at runtime, as illustrated in Figure 9. First, the compiler asks the metadata service for overlapping computations (views) for a given job J (Step 1). The naive approach would be for the compiler to look up each subgraph individually to check whether or not it is an overlapping computation. However, the number of lookup requests can explode since SCOPE job graphs can be quite large, thereby leading to higher compilation overhead as well as higher throughput requirements on the metadata service. Instead, we make one request per job and fetch all overlaps that could be relevant for that job. This is done by creating an inverted index as follows. For each overlapping computation instance, we extract tags from its corresponding job metadata. We normalize the tags for recurring jobs and create an inverted index on the tags to point to the corresponding normalized signatures.
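This per-job lookup can be sketched as follows (illustrative Python; the tag and signature formats are simplified, and all names are ours):

```python
from collections import defaultdict

def build_inverted_index(overlapping_computations):
    """Build an inverted index from normalized job-metadata tags to the
    normalized signatures of overlapping computations.

    overlapping_computations: iterable of (normalized_signature, tags).
    """
    index = defaultdict(set)
    for signature, tags in overlapping_computations:
        for tag in tags:
            index[tag].add(signature)
    return index

def candidate_signatures(index, job_tags):
    """One request per job: the union of signatures indexed under any of
    the job's tags. The result may contain false positives; the optimizer
    still matches the actual signatures in the query tree afterwards."""
    result = set()
    for tag in job_tags:
        result |= index.get(tag, set())
    return result

index = build_inverted_index([("sig_scan_clicks", ["salesVC", "daily"]),
                              ("sig_join_users", ["salesVC"])])
relevant = candidate_signatures(index, ["salesVC", "unknown-tag"])
```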
The metadata service returns the list of normalized signatures relevant to J to the compiler (Step 2). The signatures returned by the metadata service may contain false positives, and the optimizer still needs to match them with the actual signatures in the query tree. Second, when the optimizer tries to materialize an overlapping computation, it proposes the materialization to the metadata service (Step 3). The metadata service tries to create an exclusive lock to materialize this view. Due to the large number of concurrently running jobs, the same view could already be getting materialized by another job, i.e., the lock already exists. In this case, the service returns a failure message; otherwise, it returns success (Step 4). Note that we mine the average runtime of the view subgraph from its past occurrences, and use that to set the expiry of the exclusive lock. Once the exclusive lock expires, and if the view is still not materialized, another job could try to create the same materialized view. This gives us fault-tolerant behavior for view materialization. Finally, the job manager reports the successful materialization of a view to the metadata service (Step 5) and the service acknowledges the lock release (Step 6). The metadata service now makes the materialized view available for other jobs to reuse, i.e., it may appear the next time the compiler asks for relevant views for a job (Step 1). We deployed our metadata service using AzureSQL as the back-end store. The metadata service periodically polls for the output of the CLOUDVIEWS analyzer and loads the set of selected overlapping computations whenever new analysis is available. We purge expired computations at regular intervals.

### 6.2 Online Materialization

Traditional materialized views require an offline process where the database administrator is responsible for first creating all relevant materialized views, i.e., the preprocessing step, before the database becomes available for running the query workload.
This is not possible with recurring jobs which run in tight data pipelines with strict completion deadlines, where there is little room to do the preprocessing for creating the materialized views. Preprocessing blocks the recurring jobs, thereby causing them to miss their completion deadlines. Recurring jobs also have data dependencies between them, i.e., the output of one job is consumed by downstream jobs.

Figure 9: CLOUDVIEWS metadata service interactions with different SCOPE components.

Figure 10: Illustrating online materialization and query rewriting mechanisms in the SCOPE query optimizer.

… would ever be used. Third, we do not need to coordinate between the query which materializes the view (as part of its execution) and the queries which reuse that materialized view; in case of multiple queries arriving at the same time, the one which finishes first materializes the view. Fourth, in case there is a change in the query workload starting from a given recurring instance, then the view materialization based on the previous workload analysis stops automatically as the signatures do not match anymore. This avoids paying for and consuming resources for redundant views that are not going to be used after all. It also indicates that it is time to rerun the workload analysis. Finally, our approach does not affect any of the user infrastructure in their analytics stack. This means that the user scripts, data pipelines, query submission, and job scheduling all remain intact as before. For traditional users with enough room for upfront view materialization, e.g., weekly analytics, CLOUDVIEWS still provides an offline view materialization mode. In this mode, the optimizer extracts the matching overlapping computation subgraph while excluding any remaining operations in the job. The resulting plan materializes only the views and could be executed offline, i.e., before running the actual workload.
The offline mode can be configured at the VC level in the metadata service, and the annotations later passed to the optimizer are marked either online or offline depending on the metadata service configuration.

### 6.3 Query Rewriting

To rewrite queries using materialized views, we added an additional task in the Volcano-style plan search [16]. This additional task, as shown in the upper half of Figure 10, matches the normalized signatures retrieved from the metadata service with the normalized signatures of each of the query subgraphs in a top-down fashion, i.e., we match the largest materialized views first. In case of a match, the optimizer matches the precise signature as well. Only if the precise signature matches can the materialized view be reused. In such a scenario, the optimizer adds an alternate subexpression plan which reads from the materialized view. We do not limit the number of materialized views that could be used to answer a query. Once all applicable materialized views have been added as alternate subexpressions, the optimizer picks the best plan based on the cost estimates, i.e., one or more materialized views may end up not being used if their read costs are too high. The plan that reads from the materialized view also loads the actual statistics (for that sub-computation) and propagates those statistics up the query tree. This gives more confidence in deciding whether the plan using the materialized view is actually a good one or not. Overall, we provide fully automatic query rewriting using views, with zero changes to user scripts.

### 6.4 Synchronization

We have two goals in terms of synchronization: (i) build-build synchronization, i.e., not having multiple jobs materialize the same view, and (ii) build-use synchronization, i.e., reusing a computation as soon as it is materialized. We handle the build-build synchronization by trying to reuse computations before trying to materialize them, as described in Section 6.2.
For concurrent jobs, we also create exclusive locks via the metadata service, as described in Section 6.1. Given that the service is backed by AzureSQL, it provides consistent locking, and only a single job can actually materialize a view at a time. To handle the build-use synchronization, we modified the SCOPE job manager to publish a materialized view as soon as it is available. This means that the materialized view output is available even before the job that produces it finishes. We refer to this as early materialization. Early materialization is a semantic change as it breaks the atomicity of SCOPE jobs; however, it is very useful because the views could be a much smaller subgraph of the overall job graph. Furthermore, the materialized view is not a user output, but is rather treated as a system output, and therefore we do not affect the user contract. Finally, early materialization also helps in case of job failures, since a job can now restart from the materialized view, i.e., early materialization acts as a checkpoint.

### 6.5 Job Coordination

The perfect scenario for computation reuse is when one of the jobs with an overlapping computation is scheduled before the others, so that the view could be computed exactly once and reused by all others. However, in reality, multiple jobs containing the same overlapping computation could be scheduled concurrently. In this case, they will recompute the same subgraph and even attempt to materialize it (though only one will prevail). We mitigate this problem by reordering recurring jobs in the client job submission systems. To do this, in addition to selecting the interesting computations to materialize, the CLOUDVIEWS analyzer also provides the submission order of the recurring jobs that contain those computations, so as to give the maximum benefit.
We do this by grouping jobs having the same number of overlaps (a job with multiple computations can appear in multiple groups), and picking the shortest job in terms of runtime, or the least overlapping job in case of a tie, from each group. The deduplicated list of the above jobs will create the materialized views that could be used by all others, and so we propose to run them first (ordered by their runtime and breaking ties using the number of overlaps). Such an ordering can be enforced using the SCOPE client-side job submission tools. (There are multiple client-side tools developed and maintained by different business units at Microsoft to create workflows on top of our job service.) Future work will look into how view-awareness could be handled centrally by the job scheduler itself.

### 7 EVALUATION

In this section, we present an experimental evaluation of CLOUDVIEWS. We break down our evaluation into three parts, answering each of the following questions: (i) what is the impact on performance over production jobs at Microsoft? (ii) what is the impact on the traditional TPC-DS benchmark? and (iii) what are the overheads involved in CLOUDVIEWS? Below we address each of these.

### 7.1 Impact on Production Jobs

We first present performance evaluation results from our production clusters. Given the capacity constraints and costs involved in running experiments on these clusters, we carefully picked a small workload of jobs for our evaluation, as described below.

**Workload.** We ran the CLOUDVIEWS analyzer over one day's worth of jobs from one of the largest business units at Microsoft. We narrowed down the overlapping computations to those appearing at least thrice, whose cost is at least 20% of the overall job cost, and considered at most one overlapping computation per job. From the resulting set of overlapping computations, we picked the top-3 computations (views) based on their total utility, i.e., frequency times the average runtime of the overlapping computations.
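The filtering and utility ranking used for this workload can be sketched as follows (an illustrative sketch with hypothetical field and function names; the thresholds are those stated in the text):

```python
def pick_top_views(computations, k=3, min_freq=3, min_cost_fraction=0.2):
    """Select the top-k overlapping computations: keep those appearing at
    least `min_freq` times whose cost is at least `min_cost_fraction` of
    their job, then rank by total utility = frequency x average runtime.

    computations: list of dicts with keys 'freq', 'avg_runtime' (seconds),
    and 'cost_fraction' (fraction of the overall job cost).
    """
    eligible = [c for c in computations
                if c["freq"] >= min_freq
                and c["cost_fraction"] >= min_cost_fraction]
    eligible.sort(key=lambda c: c["freq"] * c["avg_runtime"], reverse=True)
    return eligible[:k]

top = pick_top_views([
    {"freq": 5, "avg_runtime": 10, "cost_fraction": 0.5},   # utility 50
    {"freq": 2, "avg_runtime": 100, "cost_fraction": 0.9},  # too infrequent
    {"freq": 10, "avg_runtime": 1, "cost_fraction": 0.3},   # utility 10
    {"freq": 4, "avg_runtime": 20, "cost_fraction": 0.1},   # too cheap
])
```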
For each of these computations, we looked up the jobs relevant to those computations to construct our workload, consisting of a total of 32 jobs: 16, 12, and 4 jobs respectively for the three overlapping computations.

**Setup.** We ran the above jobs in a pre-production environment and over production data, but with output redirection as described in [1]. For each view, we ran the relevant jobs in a sequence, in the same order as they arrived in the past workload. The first job in the sequence materializes the overlapping computation, while the remaining jobs reuse it. We executed each job twice, once with and once without CLOUDVIEWS enabled. In order to make the two runs comparable, we used the same number of machine instances and disabled opportunistic scheduling [8]. We also validated the outputs of the two runs to ensure that there is no data corruption.

**Results.** Figure 11 shows the end-to-end job latency. While there is a latency improvement in all but the three jobs that create the materialized views, the actual improvements vary (a maximum speedup of 91%, a maximum slowdown of 48%, and an average speedup of 43%). This is due to a number of factors, including: (i) materialized view read costs could be significant and variable based on the parallelism used at runtime, (ii) accurate estimates are propagated only in the subexpression that uses a view, and the estimates are still way off in other cases (often over-estimated to avoid failures in big data systems), (iii) there could be additional partitioning or sorting applied by the optimizer to satisfy the required physical properties of the parent subexpressions (it is very hard to get the best view physical design for every query), and (iv) latency improvements depend on the degree to which the overlap is on the critical path.
Still, the overall workload sees a total latency improvement of 60%, and even though we evaluated some of the most overlapping jobs for this particular customer, it demonstrates the effectiveness of CLOUDVIEWS in speeding up analytical jobs in our job service. Figure 12 shows the resource consumption in terms of the total CPU-hours for each of the jobs in our workload. Similar to latency, CPU-hour improvements are also variable (a maximum speedup of 95%, a maximum slowdown of 230%, and an average speedup of 36%). In particular, increased parallelism to read and write the materialized view, which could often be large, affects the overall resource consumption. Overall, however, there is a 54% drop in CPU time for the entire workload. Again, this is quite significant for reducing operational costs in our clusters.

### 7.2 TPC-DS Experiments

We now present results from the TPC-DS benchmark [48]. Even though TPC-DS does not really model the recurring workloads with producer/consumer behavior that we have in SCOPE, it is still helpful to evaluate our system on a more widely used benchmark.

**Workload.** We generated 1TB of TPC-DS data and considered all of the 99 queries in the benchmark. We ran all TPC-DS queries once without using CLOUDVIEWS. Then, we ran the CLOUDVIEWS analyzer to detect and select the top-10 overlapping computations, similar to what was described in Section 7.1. Note that this is a very conservative selection of overlapping computations, and much higher gains could be realized by using more sophisticated view selection methods proposed in the literature [33].

**Setup.** We ran the above workload in a test environment using the CLOUDVIEWS runtime. We use our job coordination hints to run one of the jobs containing an overlap first (to create the materialized view) and the other jobs containing the same overlap after that (to use the materialized view).
We ran each query with 100 machine instances and disabled opportunistic scheduling [8] in order to make the performance comparable.

**Results.** Figure 13 shows the runtime improvements with CLOUDVIEWS for each of the TPC-DS queries. We can see that even with our conservative selection of overlapping computations, most of the queries (79 out of 99) see an improvement in performance. Both the peak improvement and the peak slowdown are close to 62%. Overall, the average runtime improves by 12.5%, while the total workload runtime improves by 17%. These would translate to significant cost savings in a job service where users pay for the resources used, which are proportional to the runtime of each query.

### 7.3 Overheads

Finally, we discuss some of the overheads associated with CLOUDVIEWS. First, there is the overhead of running the CLOUDVIEWS analyzer. A typical run to analyze all jobs (several tens of thousands) in a cluster takes a couple of hours. However, since we analyze the recurring templates, we only need to re-run the analysis once in a while, when there are changes in the workload. We detect changes in the workload by monitoring changes in the number of materialized views created over time. Then, there is the compile-time overhead of looking up the metadata service and doing additional work during query optimization. We measured the latency added due to the metadata service lookup, and it turned out to be 19ms on average with a single thread and 14.3ms on average when using 5 threads in the metadata service. This is reasonable given that the overall compilation time for TPC-DS queries was in the range of 1-2 minutes. Likewise, we measured the query optimization overhead with CLOUDVIEWS over TPC-DS queries. Interestingly, while the optimization time increased by 28% on average when creating a materialized view, it decreased by 17% on average when using the view.
This is because the query tree becomes smaller when using the view, so any follow-up optimizations become faster.

### 8 LESSONS LEARNED

In this section, we outline experiences from deploying CLOUDVIEWS to our production clusters. The CLOUDVIEWS analyzer is available as an offline tool for VC admins, while the CLOUDVIEWS runtime ships with the most recent SCOPE release. The technology is currently in preview and available to our customers in an opt-in mode, i.e., each VC admin can enable CLOUDVIEWS either for the entire VC or for certain jobs in that VC. Eventually, the goal is to make CLOUDVIEWS opt-out, i.e., overlapping computations are reused wherever possible, but customers can explicitly turn the feature off in special cases, e.g., SLA-sensitive jobs. Below we summarize the key lessons learned from the CLOUDVIEWS project.

**Discovering hidden redundancies.** Data analytics jobs have hidden redundancies across users (or sometimes even for the same user), and it is really hard to detect and mitigate these redundancies manually at scale. Most of the customers we talked to already expected to have computation overlaps in their workloads, and it was interesting for them to see the exact jobs and the overlapping computations present in them. While some of the customers were willing to take the pain of manually modifying their scripts to prevent overlaps, most preferred to use our automatic reuse approach instead.

**Improving data sharing across VCs.** SCOPE workloads are typically organized as data pipelines, with dependencies across VCs that are fulfilled via explicit data materialization. With CLOUDVIEWS, we could help customers detect the most efficient of these materializations, better than those produced by manual best effort, which could speed up downstream processing. This is an interesting side-effect of CLOUDVIEWS and would be a subject for future work.

**Extracting static computations.**
In many cases, we saw that there were overlapping computations even across multiple recurring instances of the same job, i.e., even with different inputs. This was because portions of the job were unchanged across multiple instances, i.e., the inputs to those portions were still the same while other portions of the job had different inputs. CLOUDVIEWS was therefore effective in detecting such static computations across multiple job instances.

**Reusing existing outputs.** In several other cases, a subgraph rooted at an output operator was common across jobs. This means that multiple jobs were producing the same output without ever realizing it. CLOUDVIEWS was helpful in consolidating such redundant outputs by materializing the common computation once and reusing it wherever possible; we separately asked the owners of those jobs to remove the redundant output statements in their jobs.

**Discarding redundant jobs.** In multiple cases, entire jobs were detected as overlapping. This was because of two reasons: (i) given that jobs are recurring, some of the jobs end up scheduled more frequently than new data arrives, and (ii) there were rare cases of plain redundancy where multiple users unknowingly submitted the same job. CLOUDVIEWS helped in detecting such redundancies.

**Utility of view physical design.** Our workloads had explicit data dependencies across jobs, but the users had little idea of how to set the physical design of the output from one job that needs to be consumed by another job. With CLOUDVIEWS, we not only capture many of these dependencies across jobs, but also pick the best physical designs for those dependency outputs.

**Better reliability.** By materializing the shared computations across jobs, CLOUDVIEWS not only provides better performance, but also reduces failure rates, as fewer tasks are scheduled in subsequent jobs hitting the same overlapping computation. Thus, view materialization acts as a checkpoint providing better reliability.
This is further useful when the first job that hits an overlapping computation fails, since the overlapping portion may already be materialized due to early materialization in the CLOUDVIEWS runtime.

**Better cost estimates.** As mentioned before, view materialization improves the cost estimates since we can collect exact statistics from the materialized output. Given that we materialize computations that are frequent as well as expensive, better estimates over those computations are even more significant.

**User expectations.** It was important to manage user expectations in the CLOUDVIEWS project. This includes VC admin expectations to see the cost of overlaps in their workload and the expected gains with CLOUDVIEWS, end-user expectations to know what's going on in their jobs and to react accordingly, and the operational support team's expectations to be able to reproduce and debug the jobs submitted with CLOUDVIEWS enabled.

**Updates & privacy regulations.** Finally, any update to the input data results in a different precise signature, thus automatically invalidating any older materialized view for reuse. This is crucial for privacy reasons when customers explicitly request to stop using their personal data, as provisioned in the new EU GDPR [13].

### 9 RELATED WORK

**Traditional materialized views.** Selecting views to materialize has been a long-standing topic of research in databases. Given a set of queries, view selection deals with the problem of selecting a set of views to materialize to minimize some cost function (such as query evaluation and/or view maintenance cost) under some constraints (e.g., a space budget) [33]. Several approaches have been proposed, especially in the context of data warehouses [19, 46] and data cubes [22]. These include modeling the problem as a state optimization problem and using a search algorithm to find the most appropriate view set [46], using an AND/OR DAG to model the alternatives [19], or using a lattice to model data cube operations [22].
MQO [42] is similar to view selection, with the difference that views are typically only transiently materialized for the execution of a given query set. [41] describes how to incorporate MQO into a Volcano-style optimizer. It uses an AND/OR DAG and proposes heuristic algorithms for choosing intermediate results to materialize (with no space budget). Recycling intermediate results has also been proposed in the context of MonetDB [23] and pipelined query evaluation [36]. Views are more generic than subexpressions, as they can consider computation that does not appear in the logical query plan. This increases the space of possible solutions, and complicates query containment and answering queries using views [20]. Subexpression selection has also been considered in SQL Server [53]. Other related works have looked at common subexpressions within the same job script [44]. All of the above works have focused on traditional databases with a few tens to hundreds of queries. In contrast, the SCOPE job service processes tens of thousands of jobs per cluster per day. Thus, scalability is a major concern in our setting. In this paper, we described a system that can create and reuse materialized views at our scale. In companion work, we looked at scalable view selection for our workload size [24].

**Computation reuse in big data platforms.** Reusing computation has received particular attention in big data platforms, since (i) there is a lot of recurring computation, (ii) optimization time is relatively short compared to the execution time of the jobs, and (iii) performance and resource benefits can be significant. ReStore [12], for instance, considers the caching of map-reduce job outputs, given a space budget. Others have looked at history-aware query optimization with materialized intermediate views [38] and at allocating the cache fairly amongst multiple cloud tenants [28]. Still others have looked at multi-query optimization in the context of map-reduce [37, 50].
PigReuse [10] addresses MQO for Pig scripts. It creates an AND/OR graph using the nested algebra representation of the Pig jobs, and then uses an ILP solver to select the least costly plan that can answer all queries. Most of these works consider sharing opportunities only for map and reduce operators, and hence their applicability is limited. Nectar [18] considers caching intermediate results in a more generalized DAG of operators. It uses heuristics, based on lookup frequency and the runtime/size of the intermediate results, to decide on cache insertion. Still, an intermediate result is typically the output of an operator pipeline (i.e., consisting of multiple operators), without considering the outputs of all possible subexpressions. Finally, Kodiak [30] applies the traditional database approach of selecting and materializing views, while ensuring that queries meet their SLAs and the total view storage is within a budget. Our approach is different from the above works, since we consider computation reuse over recurring jobs in a job service that is always online, i.e., there is no offline phase for view creation. Furthermore, our end-to-end system includes establishing a feedback loop to ensure that computation reuse is actually effective.

**Recurring and progressive query optimization.** Both recurring and progressive optimization focus on the problem of inaccurate or missing statistics in query optimization, and not on reusing common subexpressions across jobs. In particular, recurring optimization (such as DB2's LEO [45] and, more recently, SCOPE's RoPE [1]) collects actual statistics of query subexpressions at runtime, and uses them in future executions of the same subexpressions to improve the statistics, and hence the quality of the optimized plan. Progressive optimization has been studied both in the traditional query optimization setting [7, 26, 34] and for big data clusters [9, 27].
These systems observe statistics at runtime and can change the query plan mid-flight in case the observed statistics are significantly different from the estimated ones. We borrow the concept of signatures from [9] to efficiently identify common subgraphs across jobs.

**Shared workload optimization.** Many works have looked at building a common query plan for a set of queries to share operators, such as scans [40, 54] or joins [32]. A global optimization approach to find the overall best shared plan is presented in [14]. Work sharing has also been explored in big data systems. Examples include scan sharing in MapReduce [37], Hive [47], and Pig [51]. Unlike such approaches, we opted to keep each job separate: operator sharing in pay-as-you-go job services makes billing and accounting tedious, while it introduces artificial dependencies between jobs, which become even worse in the case of failures.

### 10 CONCLUSION

In this paper, we presented a case for computation reuse in an analytics job service. We motivated the problem via a detailed analysis of production SCOPE workloads at Microsoft, and described the CLOUDVIEWS system for automatically reusing overlapping computations in SCOPE. The CLOUDVIEWS system addresses several novel challenges, including recurring workloads, establishing a feedback loop, and operating in an online setting. Overall, computation overlap is a problem across almost all business units at Microsoft, and CLOUDVIEWS can automatically reuse computations wherever possible, resulting in significant potential cost savings.
Taxonomies of regular tree algorithms. Cleophas, L.G.W.A.; Hemerik, C. Published in: Proceedings of the Prague Stringology Conference 2009 (PSC'09, Prague, Czech Republic, August 31-September 2, 2009). Document Version: Publisher's PDF, also known as Version of Record.
Taxonomies of Regular Tree Algorithms Loek Cleophas and Kees Hemerik 1 FASTAR/Espresso Research Group, Department of Computer Science, University of Pretoria, 0002 Pretoria, Republic of South Africa, http://www.fastar.org 2 Software Engineering & Technology Group, Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands, http://www.win.tue.nl/set loek@loekcleophas.com, c.hemerik@tue.nl Abstract. Algorithms for acceptance, pattern matching and parsing of regular trees and the tree automata used in these algorithms have many applications, including instruction selection in compilers, implementation of term rewriting systems, and model checking. Many such tree algorithms and constructions for such tree automata appear in the literature, but some deficiencies existed, including: inaccessibility of theory and algorithms; difficulty of comparing algorithms due to variations in presentation style and level of formality; and lack of reference to the theory in many publications. An algorithm taxonomy is an effective means of bringing order to such a field. We report on two taxonomies of regular tree algorithms that we have constructed to deal with the deficiencies. The complete work has been presented in the PhD thesis of the first author. Keywords: tree acceptance, tree pattern matching, tree automata, algorithm taxonomies 1 Introduction We consider the field of regular tree languages for ordered, ranked trees. This field has a rich theory, with many generalizations from the field of regular string languages, and many relations between the two [9,10,12,14]. Parts of the theory have broad applicability in areas as diverse as instruction selection in compilers, implementation of term rewriting systems, and model checking. We focus on algorithmic solutions to three related problems in the field, i.e. tree acceptance, tree pattern matching and tree parsing.
Many such algorithms appear in the literature, but unfortunately some deficiencies exist, including: 1. Inaccessibility of the theory and algorithms, as they are scattered over the literature and few or no (algorithm oriented) overview publications exist. 2. Difficulty of comparing the algorithms due to differences in presentation style and level of formality. A taxonomy—in a technical sense made more precise below—is an effective means of bringing order to such a subject. A taxonomy is a systematic classification of problems and solutions in a particular (algorithmic) problem domain. We have constructed two such taxonomies, one for tree acceptance algorithms and one for tree pattern matching ones. (An example of a language defined by a regular tree grammar can be found in Section 4.) A few more practical deficiencies existed as well: no large and coherent collection of implementations of the algorithms existed, and for practical applications it was difficult to choose between algorithms. We therefore designed, implemented, and benchmarked a highly coherent toolkit of most of these algorithms as well. Taxonomies also form a good starting point for the construction of such algorithmic toolkits. In the past, taxonomies and/or toolkits of this kind have been constructed for e.g. sorting [3,11], garbage collection [17], string pattern matching, and finite automata construction and minimization [21,22]. In this paper we focus on one of our taxonomies, and comment only briefly on the other one and on the toolkit. The complete work has been presented in the PhD thesis of the first author [9]. For more details we refer to this thesis and to recent shorter publications [5,6,18]. Section 2 gives a brief introduction to taxonomies as we consider them. In Section 3 we outline the structure of our taxonomy of algorithms for tree acceptance and briefly compare it to the one for tree pattern matching. Afterwards we focus on the one for tree acceptance.
Definitions of tree and tree grammar related notions are given in Section 4. The main branches of the taxonomy for tree acceptance are discussed in Sections 5–8. Section 9 briefly discusses some other parts of the work, namely the toolkit and accompanying graphical user interface and the benchmarking experiments performed with them. We end the paper with some concluding remarks in Section 10. 2 Taxonomies In our technical sense a taxonomy is a means of ordering a set of algorithmic problems and their solutions. Each node of the taxonomy graph is a pair consisting of (a specification of) a problem and an algorithm solving the problem. For each (problem, algorithm) pair the set of essential details is determined. In general, there are two kinds of details: problem details, which restrict the problem, and algorithm details, which restrict the algorithm (e.g. by making it more deterministic). The root of the taxonomy graph contains a high-level algorithm of which the correctness is easily shown. A branch in the graph corresponds to addition of a detail in a correctness preserving way. Hence, the correctness of each algorithm follows from the details on its root path and the correctness of the root. Construction of an algorithm taxonomy is a bottom-up process. A literature survey of the problem domain is performed to gather algorithms. The algorithms are rephrased in a common presentation style and analyzed to determine their essential details. When two algorithms differ only in a few details, abstracting over those details yields a common ancestor. Repeating this abstraction process leads to the main structure of a taxonomy graph. Considering new combinations of details may lead to discovery of new algorithms. Eventually the taxonomy may be presented in a top-down manner. Several taxonomies of this kind appear in the literature. Broy and Darlington each constructed one of sorting algorithms [3,11]. 
Jonkers [17] constructed a taxonomy of garbage collection algorithms and also developed a general theory about algorithm taxonomies. Watson [21] applied the method to construct taxonomies for string pattern matching algorithms and for the construction and minimization of finite automata. Both in subject and in style our work is closest to Watson's. 3 Overview of the Taxonomies of Regular Tree Algorithms The tree acceptance (aka language membership) problem as we consider it is the following: given a regular tree grammar and a subject tree, determine whether the tree is an element of the language defined by the grammar. Figure 1 depicts the taxonomy of algorithms we have constructed for this problem. The edge labels correspond to details, explained in Table 1. In the taxonomy graph, three main subgraphs can be distinguished. The first subgraph (detail T-ACCEPTOR and below) contains all algorithms based on the correspondence between regular tree grammars and finite tree automata. For every regular tree grammar an undirected finite tree automaton can be constructed which accepts exactly the trees generated by the grammar. By adding more detail, viz. a direction (detail FR: frontier-to-root, or detail RF: root-to-frontier) or determinacy (detail DET), more specific constructions are obtained. The acceptance algorithms from this part of the taxonomy are described in more detail in Section 5, while the tree automata constructions used in them are discussed in Section 6. The second subgraph (detail MATCH-SET and below) contains all algorithms based on suitably chosen generalizations of the relation $S \Rightarrow^* t$ (where $\Rightarrow^*$ indicates derivation in zero or more steps (see Section 4), $S$ is the start symbol of the grammar and $t$ is the subject tree). For each subtree of $t$, they compute a set of items from which that subtree may be derived, a so-called match set. Tree $t$ is accepted if and only if its match set contains $S$.
The algorithms in this subgraph of the taxonomy differ in the item set used and in how the match sets are computed. This part of the taxonomy is described in more detail in Section 7. Figure 1. Tree acceptance taxonomy. Each node is labeled with its corresponding algorithm or section (S.) number in [9]. Constructions for tree acceptors used in algorithms of branch (T-ACCEPTOR) are not depicted. The bottom part of the figure shows the four possible filters that can be used for detail FILTER. <table> <thead> <tr> <th>Detail</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>T-ACCEPTOR</td> <td>Use a tree automaton accepting the language of a regular tree grammar to solve the language membership problem.</td> </tr> <tr> <td>RF</td> <td>Consider the transition relations of the tree automaton used in an algorithm to be directed in a root-to-frontier or top-down direction.</td> </tr> <tr> <td>FR</td> <td>Consider the transition relations of the tree automaton used in an algorithm to be directed in a frontier-to-root or bottom-up direction.</td> </tr> <tr> <td>DET</td> <td>Use a deterministic version of an automaton.</td> </tr> <tr> <td>MATCH-SET</td> <td>Use an item set and a match set function to solve the tree acceptance/language membership problem. Such an item set is derived from the productions of the regular tree grammar and the match set function indicates from which of these items a tree is derivable.</td> </tr> <tr> <td>REC</td> <td>Compute match set values recursively, i.e. compute the match set values for a tree from the match set values computed for its direct subtrees.</td> </tr> <tr> <td>FILTER</td> <td>Use a filtering function in the computation of match set function values.
Before computing the match set for a tree, such a filtering function is applied to the match sets of its direct subtrees.</td> </tr> <tr> <td>TABULATE</td> <td>Use a tabulated version of the match set function (and of the filter functions, if filtering is used), in which a bijection is used to identify match sets by integers.</td> </tr> <tr> <td>S-PATH</td> <td>Uniquely decompose production right hand sides into stringpaths. Based on matching stringpaths, production right hand sides and nonterminals deriving the subject tree can be uniquely determined, and tree acceptance can thus be solved.</td> </tr> <tr> <td>SP-MATCHER</td> <td>Use an automaton as a pattern matcher for a set of stringpaths in a root-to-frontier or top-down subject tree traversal.</td> </tr> <tr> <td>ACA-SPM</td> <td>Use an (optimal) Aho-Corasick automaton as a stringpath matcher and define transition and output functions in terms of that automaton.</td> </tr> <tr> <td>DRFTA-SPM</td> <td>Use a deterministic root-to-frontier tree automaton as a stringpath matcher and define transition and output functions in terms of that automaton.</td> </tr> </tbody> </table> Table 1. The details used in the tree acceptance taxonomy of Figure 1. The third subgraph (detail SP-MATCHER and below) contains algorithms based on the decomposition of items into so-called stringpaths and subsequent use of string matching techniques. Based on stringpath matches found, matches of items and hence essentially the match sets mentioned previously are computed for each subtree of \( t \). Section 8 gives a brief explanation of this taxonomy part. As our focus in this paper is on the tree acceptance taxonomy and the algorithms and constructions included in it, we do not formally define the tree pattern matching problem. Figure 2 shows the taxonomy of tree pattern matching algorithms. Although we do not explicitly give the meaning of the details used, it should be clear that the taxonomies for tree acceptance and tree pattern matching have much in common.
Techniques such as the subset construction, match sets, and stringpaths are used in both. This is not surprising: the two problems are closely related, and some kinds of tree acceptors can be turned into tree pattern matchers (or vice versa) with little effort. The same phenomenon can be observed in acceptors and pattern matchers for string languages. 4 Notation and definitions We use \( \mathbb{B} \) and \( \mathbb{N} \) to denote the booleans and the natural numbers. We use notation \( \langle \text{Set } a : R(a) : E(a) \rangle \) for the set of expressions \( E(a) \) for which \( a \) satisfies range predicate \( R(a) \). Many of the other notations and definitions we use are related to regular tree language theory and to a large extent generalizations of familiar ones from regular string language theory. To aid readers unfamiliar with this theory, we briefly introduce the concepts needed in the rest of this paper. Readers may want to consult e.g. \([9,10,12,14]\) for more detail. Let \( \Sigma \) be an alphabet, and \( r : \Sigma \mapsto \mathbb{N} \). Pair \((\Sigma, r)\) is a ranked alphabet, \( r \) is a ranking function, and for all \( a \in \Sigma \), \( r(a) \) is called the rank or arity of \( a \). (The ranking function indicates the number of child nodes a node labeled by a particular symbol will have.) We use \( \Sigma_n \) for \( 0 \leq n \) to indicate the subset of \( \Sigma \) of symbols with arity \( n \). Given a ranked alphabet \((\Sigma, r)\), the set of ordered, ranked trees over this alphabet, set \( \text{Tr}(\Sigma, r) \), is the smallest set satisfying 1. \( \Sigma_0 \subseteq \text{Tr}(\Sigma, r) \), and 2. \( a(t_1, \ldots, t_n) \in \text{Tr}(\Sigma, r) \) for all \( t_1, \ldots, t_n \in \text{Tr}(\Sigma, r) \), \( a \in \Sigma \) such that \( r(a) = n \neq 0 \). As a running example, we assume \((\Sigma, r)\) to be \(\{(a, 2), (b, 1), (c, 0), (d, 0)\}\), i.e. consisting of symbols \(a, b, c\) and \(d\) with rank 2, 1, 0 and 0. 
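As a concrete illustration (a minimal Python sketch; the tuple encoding and helper names are my own, not from the paper), ordered, ranked trees over the running alphabet can be represented as nested tuples, with well-formedness checked against the ranking function \( r \):

```python
# Sketch: ordered, ranked trees as nested tuples (label, child1, ..., childn).
# The alphabet is the paper's running example: a/2, b/1, c/0, d/0.
RANK = {"a": 2, "b": 1, "c": 0, "d": 0}

def is_tree(t):
    """Check membership in Tr(Sigma, r): each node has exactly r(label) children."""
    label, children = t[0], t[1:]
    return RANK.get(label) == len(children) and all(is_tree(c) for c in children)

# Example trees: c, a(b(c), d) and a(a(b(c), c), d).
assert is_tree(("c",))
assert is_tree(("a", ("b", ("c",)), ("d",)))
assert is_tree(("a", ("a", ("b", ("c",)), ("c",)), ("d",)))
assert not is_tree(("b",))  # b has rank 1, so it needs exactly one child
```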
Trees in \( \text{Tr}(\Sigma, r) \) include for example \( c, a(b(c), d) \) and \( a(a(b(c), c), d) \). A regular tree grammar (RTG) \( G \) is a 5-tuple \((N, \Sigma, r, \text{Prods}, S)\) where \( N \) and \( \Sigma \) are disjoint alphabets (the nonterminals and terminals), \((N \cup \Sigma, r)\) is a ranked alphabet in which all nonterminals have rank 0, \( \text{Prods} \subseteq N \times \text{Tr}(N \cup \Sigma, r) \) is the finite set of productions, and $S \in N$ (the start symbol). We use LHS and RHS for left hand side and right hand side (of a production), and use RHS(Prods) for the set of production RHSs. Given a grammar $G$, we use $\Rightarrow$ for a derivation step, in which a nonterminal is replaced by a corresponding production RHS. The reflexive and transitive closure of $\Rightarrow$ is denoted by $\Rightarrow^*$. The subset of $Tr(\Sigma, r)$ derivable from $S$ is denoted $L(G)$. For technical reasons, we introduce the augmented grammar $G'$ for a grammar $G$, defined by $G' = (N \cup \{S'\}, \Sigma, r \cup \{(S', 0)\}, Prods \cup \{S' \mapsto S\}, S')$ where $S'$ is a fresh symbol. In this paper, we assume an example grammar $G_1 = (N, \Sigma, r, Prods, S)$ with $N = \{S, B\}$, $r$ and $\Sigma$ as before, and with Prods defined as $\{S \mapsto a(B, d), \ S \mapsto a(b(c), B), \ S \mapsto c, \ B \mapsto b(B), \ B \mapsto d\}$. We assume $G$ to be the corresponding augmented grammar. 5 Algorithms based on Tree Automata The first subgraph of the taxonomy deals with algorithms for tree acceptance that are based on correspondences between regular tree grammars and finite tree automata. The theoretical basis for this correspondence is well-known and generalizes a similar correspondence between regular string grammars and finite string automata. To ease understanding we briefly outline how the generalization works. It is well known that the theory of regular tree languages generalizes that of regular string languages [9,10,12,14]. 
This is not surprising: any string $a_0 \cdots a_{n-1}$ can be seen as a special kind of regular tree, viz. one consisting of $n$ unary nodes, each labeled with a symbol $a_i$ of rank 1, closed by a nullary node labeled with a symbol of rank 0. Notions from finite automata for strings can be generalized to the tree case as well, although this requires a particular view of such automata. Suppose that a particular string automaton goes through a state sequence $q_0, \ldots, q_n$ when presented with the string $a_0 \cdots a_{n-1}$. This means that for each $i : 0 \leq i < n$ the pair of states $(q_i, q_{i+1})$ must be in the transition relation of symbol $a_i$. We can summarize the transition sequence by the following alternation of states and symbols: $q_0 a_0 q_1 \cdots a_{n-1} q_n$. In other words, the positions in the string have been consistently annotated with states $q_0, \ldots, q_n$. The language accepted by the automaton can be defined as the set of strings that can be consistently annotated in this way, such that $q_0$ and $q_n$ are initial and final states. This view can easily be generalized to ordered, ranked trees: each node is annotated with a state, and for each node labeled with a symbol $a$ of rank $n$, the state $q_0$ assigned to that node and the states $q_1, \ldots, q_n$ of the $n$ direct subnodes should be such that the tuple $(q_0, (q_1, \ldots, q_n))$ is in the transition relation of symbol $a$. Note that this simplifies to $(q_0, ())$ for symbols of rank 0. (Hence, taking a frontier-to-root or bottom-up view on tree automata, no equivalent of a string automaton's initial states is needed; no equivalent of a string automaton's final states is needed when taking a root-to-frontier or top-down view.) A tree is accepted by a finite tree automaton if and only if it can be consistently annotated such that the state assigned to the root is a so-called root accepting state.
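The string-as-tree view described above can be made concrete with a small sketch (Python; the end-marker symbol `#` and the function name are my own choices, not the paper's):

```python
def string_to_tree(s, end="#"):
    """View a string a0...a(n-1) as a chain of unary nodes closed by a nullary end marker."""
    tree = (end,)                # the rank-0 node closing the tree
    for symbol in reversed(s):   # each string symbol becomes a unary node
        tree = (symbol, tree)
    return tree

# "ab" becomes the tree a(b(#)): two unary nodes ending in the nullary marker.
assert string_to_tree("ab") == ("a", ("b", ("#",)))
assert string_to_tree("") == ("#",)
```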
This motivates the following definition: **Definition 1.** A (finite) tree automaton (TA) $M$ is a 5-tuple $(Q, \Sigma, r, R, Q_{ra})$ such that $Q$ is a finite set, the state set; $(\Sigma, r)$ is a ranked alphabet; $R = \{R_a \mid a \in \Sigma\} \cup \{R_\varepsilon\}$ is the set of transition relations (where $R_a \subseteq Q \times Q^n$ for all $a \in \Sigma$ with $r(a) = n$, and $R_\varepsilon \subseteq Q \times Q$); and $Q_{ra} \subseteq Q$ is the set of root accepting states. Many important theorems carry over from regular string grammars and automata to the tree case as well. In particular: **Theorem 2.** For every regular tree grammar $G$ there exists a tree automaton $M$ such that $\mathcal{L}(G) = \mathcal{L}(M)$. This theorem justifies the following algorithm as a solution for tree acceptance: **Algorithm 3 (T-Acceptor)** \[ \begin{align*} &\text{const } G = (N', \Sigma, r', \text{Prods}', S') : \text{augmented RTG}; \\ &\phantom{\text{const }} t : \text{Tr}(\Sigma, r); \\ &\text{var } b : \mathbb{B} \\ &\text{let } M = (Q, \Sigma, r, R, Q_{ra}) \text{ be a TA such that } \mathcal{L}(M) = \mathcal{L}(G); \\ &b := t \in \mathcal{L}(M) \\ &\{\, b \equiv t \in \mathcal{L}(G) \,\} \end{align*} \] This abstract and rather trivial algorithm forms the root of the part of the taxonomy graph containing all algorithms based on tree automata. Note that it does not specify how $t \in \mathcal{L}(M)$ is determined. It could consider all state assignments to $t$ respecting the transition relations $R$, and determine whether an accepting one exists. To obtain more specific and more practical algorithms, the automata and hence the state assignments can be considered as directed ones (detail FR: frontier-to-root, aka bottom-up, or detail RF: root-to-frontier, aka top-down). This results in (the use of) an $\varepsilon$-nondeterministic frontier-to-root TA ($\varepsilon$NFRTA) or an $\varepsilon$-nondeterministic root-to-frontier TA ($\varepsilon$NRFTA).
Restricting the directed automata to the case without $\varepsilon$-transitions, we obtain the $\varepsilon$-less TA and the ($\varepsilon$-less) NRFTA and NFRTA. As with string automata, $\varepsilon$-transitions can be removed by a straightforward transformation. The use of the resulting automata slightly simplifies the acceptance algorithms. ### 5.1 FR: Frontier-to-Root Tree Acceptors For ($\varepsilon$)NFRTAs, a recursive acceptance function $RSt : Tr(\Sigma, r) \rightarrow P(Q)$ can be defined. This function yields the states assigned to a tree's root node based on those assigned to that node's child nodes. A subject tree $t$ is then accepted if and only if at least one root accepting state occurs in state set $RSt(t)$. Restricting the transition relations $R_a$ of the ($\varepsilon$-less) NFRTA to be single-valued functions, we obtain the deterministic DFRTA. A subset construction $\text{SUBSET}_{\text{FR}}$ can be given, similar to that for string automata, to obtain a DFRTA for an ($\varepsilon$)NFRTA. The use of a DFRTA leads to the straightforward Algorithm 4 given below.
Algorithm 4 (T-ACCEPTOR, FR, DET) \[ \begin{align*} &\text{const } G = (N', \Sigma, r', \text{Prods}', S') : \text{augmented RTG}; \\ &\phantom{\text{const }} t : \text{Tr}(\Sigma, r); \\ &\text{var } b : \mathbb{B} \\ &\text{let } M = (Q, \Sigma, r, R, Q_{ra}) \text{ be a DFRTA such that } \mathcal{L}(M) = \mathcal{L}(G); \\ &b := \text{Traverse}(t) \in Q_{ra} \\ &\{\, b \equiv t \in \mathcal{L}(G) \,\} \\ &\text{func Traverse}(st : \text{Tr}(\Sigma, r)) : Q = \\ &|[ \\ &\quad \text{let } a = st(\varepsilon); \\ &\quad \{\, st = a(st_1, \ldots, st_n) \text{ where } n = r(a) \,\} \\ &\quad \text{Traverse} := R_a(\text{Traverse}(st_1), \ldots, \text{Traverse}(st_n)) \\ &]| \quad \{\, \text{Post: } \{\text{Traverse}\} = RSt(st) \,\} \end{align*} \] ### 5.2 RF: Root-to-Frontier Tree Acceptors For root-to-frontier automata, we can define a root-to-frontier acceptance function \( \text{Accept} : \text{Tr}(\Sigma, r) \times Q \rightarrow \mathbb{B} \) indicating whether an accepting computation starting from some state exists for a tree. In the resulting Algorithm (T-ACCEPTOR, RF) (not given here), the value of this function is computed by possibly many root-to-frontier subject tree traversals (starting from each of the root accepting states). As with FRTAs, RFTAs can be restricted to \( \varepsilon \)-less ones and further to deterministic ones. Since DRFTAs are known to be less powerful than other TA kinds, algorithms using DRFTAs cannot solve the acceptance problem for every input grammar. We refer the reader to [9] for more information on algorithms using RFTAs to directly solve the tree acceptance problem. In Section 8 we briefly discuss how DRFTAs can be used for so-called stringpath matching. Since there is a one-to-one correspondence between a tree and its set of stringpaths, DRFTAs can thus be used to solve the tree acceptance problem, albeit indirectly.
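To make the frontier-to-root traversal of Algorithm 4 concrete, here is a small Python sketch. The automaton is a hand-built toy DFRTA over the running alphabet that accepts exactly the trees whose leaves are all labeled c; it is illustrative only and is not an acceptor for the example grammar:

```python
# Toy DFRTA: two states; transition functions R_a map child-state tuples to a state.
OK, BAD = "ok", "bad"
R = {
    "c": lambda: OK,                                          # rank 0: leaf c is fine
    "d": lambda: BAD,                                         # rank 0: leaf d is not
    "b": lambda q1: q1,                                       # rank 1: pass the state up
    "a": lambda q1, q2: OK if (q1, q2) == (OK, OK) else BAD,  # rank 2: both subtrees OK
}
ROOT_ACCEPTING = {OK}

def traverse(tree):
    """Function Traverse of Algorithm 4: assign a state to each node bottom-up."""
    label, children = tree[0], tree[1:]
    return R[label](*(traverse(c) for c in children))

def accepts(tree):
    # A tree is accepted iff the state computed for the root is root accepting.
    return traverse(tree) in ROOT_ACCEPTING

assert accepts(("a", ("b", ("c",)), ("c",)))      # all leaves are c
assert not accepts(("a", ("b", ("c",)), ("d",)))  # one leaf is d
```

Since the automaton is deterministic, a single bottom-up pass assigns exactly one state per node, which is what makes the `b := Traverse(t) ∈ Q_ra` step of Algorithm 4 a constant-time check after the traversal.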
6 Construction of tree automata Nowhere in Section 5 did we specify how the tree automata \( M \), which are used in Algorithm (T-ACCEPTOR) and derived algorithms, are to be constructed. Such constructions can be considered separately, as we do in this section. Algorithm (T-ACCEPTOR) and derived ones use TAs \( M \) such that \( \mathcal{L}(M) = \mathcal{L}(G) \). Depending on the algorithm, the acceptor may need to be undirected or directed (RF or FR), and directed ones may need to be nondeterministic or deterministic. The constructions differ in a number of aspects: - which item set is used to construct states: one containing all subtrees of production RHSs, or one containing just the nonterminals as well as the proper subtrees among RHSs; - whether \( \varepsilon \)-transitions are present or not—the latter indicated by label REM-\( \varepsilon \); - whether automata are undirected, root-to-frontier (aka top-down) or frontier-to-root (aka bottom-up), and whether \( \varepsilon \)-less directed automata are deterministic or not. By combining choices for these aspects, twenty-four constructions for tree acceptors can be obtained. Roughly half of them, seeming most interesting because they occur in the literature or because they lead to ones that do, are treated in [9, Chapter 6]. For each construction in our taxonomy, the discussion in [9, Chapter 6] defines the state set, root accepting state set and transition relation, and usually gives an example and a discussion of correctness and of related constructions and literature. Presenting all of the constructions in such a similar, uniform and precise way facilitates understanding and comparing the different constructions. To further simplify understanding and comparison, the constructions are identified by sequences of detail labels. For example, the first construction, Construction (TGA-TA:ALL-SUB), is a basic construction for undirected TAs.
Its state set corresponds to all subtrees of production RHSs, while its transitions encode the relations between (tuples of) such states, based on the relation between a tree and its direct subtrees and the relation between a production LHS and RHS. We cannot present the constructions here in detail, but restrict ourselves to describing them briefly and showing how constructions from the literature are included. We emphasize that our taxonomy presents all of them together and relates all of them for the first time. – The basic Construction (TGA-TA:ALL-SUB) described above does not explicitly appear in the literature, but its FR and RF versions appear in van Dinther's 1987 work [20]. – Applying detail REM-\( \varepsilon \) results in Construction (TGA-TA:ALL-SUB:REM-\( \varepsilon \)) for automata isomorphic to those constructed by Ferdinand et al. (1994) [13]. This detail makes states corresponding to certain full RHSs unreachable and therefore useless. – To prevent such states from occurring, a state set containing only nonterminals and proper subtrees of RHSs can be used instead. Of the resulting Construction (TGA-TA:PROPER-N:REM-\( \varepsilon \)), • an undirected version appears in Ferdinand, Seidl and Wilhelm's 1994 paper [13] and later in Wilhelm & Maurer [23]. Somewhat surprisingly, the construction in its general form apparently did not occur in the literature before 1994. It is well known however that every RTG can easily be transformed into one with productions of the form \( A \rightarrow a(A_1, \ldots, A_n) \) only (by introducing fresh nonterminals and productions). It is also straightforward to transform any RTG into one with productions of the form given above and of the form \( A \rightarrow B \) (i.e. additionally allowing unit productions). For such RTGs, • an FR directed version of Construction (TGA-TA:PROPER-N:REM-\( \varepsilon \)) already appears in Brainerd's 1960s work [2] and again in [20], and • an RF directed version appears in Comon et al.
’s online work [10]. – Constructions (TGA-TA:ALL-SUB:REM-\( \varepsilon \):RF:SUBSET\(_{\text{RF}}\)) and (TGA-TA:PROPER-N:REM-\( \varepsilon \):RF:SUBSET\(_{\text{RF}}\)), which are derived constructions resulting in DRFTAs, do not appear in the literature, probably due to the restricted power of such automata. For a specific subclass of RTGs for which DRFTAs can be constructed, a variant resulting in tree parsers based on such DRFTAs is presented in [20]. A construction for DFRTAs which uses all subtrees of RHSs for state set construction—i.e. Construction (TGA-TA:ALL-SUB:REM-\( \varepsilon \):FR:SUBSET\(_{\text{FR}}\))—appears in [15]. The encompassed subset construction constructs the reachable subsets only, with an explicit sink state for the empty set. The presentation mostly disregards the automata view and uses the recursive match set view of Section 7. It was inspired by and gives a more formal version of the initial construction presented in Chase's 1987 paper [4]. A construction for DFRTAs which uses only nonterminals and proper subtrees of RHSs—Construction (TGA-TA:PROPER-N:REM-\( \varepsilon \):FR:SUBSET\(_{\text{FR}}\))—appears in [13, Section 6] and in [23, Sections 11.6–11.7]. 7 Algorithms based on Match Sets In this section we consider the second subgraph of the taxonomy. Algorithms in this part solve the tree acceptance problem, i.e. \( S \Rightarrow^* t \), by suitably chosen generalizations of relation \( \Rightarrow^* \). First, from the tree grammar a set Items is constructed, e.g. the set of subtrees of right hand sides of productions of the grammar. Then, for the subject tree \( t \), a so-called match set \( MS(t) \) is computed: the set of all \( p \in \text{Items} \) for which \( p \Rightarrow^* t \) holds. Tree \( t \) is accepted if and only if \( S \in MS(t) \). Algorithms in this part of the taxonomy differ in the set \( \text{Items} \) used and in how function \( MS \) is computed. The first algorithm, Algorithm (MATCH-SET), does not specify how to compute function \( MS \).
Function \( MS \) can effectively be computed recursively over a subject tree, i.e. by a scheme of the form \( MS(a(t_1, \ldots, t_n)) = \mathcal{F}(MS(t_1), \ldots, MS(t_n)) \). Function \( \mathcal{F} \) composes and filters items for \( MS(a(t_1, \ldots, t_n)) \) from those in the match sets \( MS(t_1), \ldots, MS(t_n) \) computed for the \( n \) direct subtrees of \( a(t_1, \ldots, t_n) \). For symbols \( a \) of rank \( n \) and trees \( t_1, \ldots, t_n \), the value of \( \mathcal{F}(MS(t_1), \ldots, MS(t_n)) \) is defined to be
\[ Cl(Comp_a(\text{Filt}_{a,1}(MS(t_1)), \ldots, \text{Filt}_{a,n}(MS(t_n)))) \]
where:

- The \( \text{Filt}_{a,i} \) are filter functions, filtering items from the respective match sets based e.g. on the values of \( a \) and \( i \). Filtering rests on the observation that certain elements of children's match sets never contribute to the parent's match set; such a child match set element may thus be safely disregarded when computing the parent's match set. Note that the identity function is among these filter functions.

- The \( Comp_a \) are composition functions, which result in those subtrees of RHSs that are compositions of the symbol \( a \) with the children's (filtered) match set elements.

- \( Cl \) is a closure function, adding e.g. nonterminal LHSs corresponding to complete RHSs that are in the composite match set.

The resulting algorithms are Algorithm (MATCH-SET, REC) (not using filter functions) and Algorithms (MATCH-SET, REC, FILTER) (with different instantiations of filter functions). As an example of recursive match set computation, assume that we want to compute \( MS(a(b(c), d)) \) and that we use the identity function as a filter function (i.e. no filtering is applied). Furthermore, assume that \( MS(b(c)) = \{b(c), b(B), B\} \) and \( MS(d) = \{d, B\} \) have already been computed.
Based on this, \( MS(a(b(c), d)) \) will contain \( a(b(c), d) \) and \( a(B, d) \) by composition with \( a \), and \( B \) and \( S \) by the closure function, since \( S \Rightarrow a(b(c), d) \) and \( B \Rightarrow S \). No other elements are included in \( MS(a(b(c), d)) \). It is straightforward to show that match sets and relations between them, as computed by Algorithm (MATCH-SET, REC) with particular item sets, correspond to states and transition relations of DFRTAs obtained by particular automata constructions as in Section 6. Recursive match set computation and the use of a DFRTA as an acceptor are simply two views on one approach [9, Chapter 5]. This correspondence is indicated by the dotted line in Figure 1. To improve computation efficiency, the values of \( MS \), i.e. of the acceptance function of the DFRTA, are usually tabulated to prevent recomputation. Such tabulation uses a bijection between states (elements of \( \mathcal{P}(\text{Items}) \)) and integers for indexing the tables. The tabulation starts with symbols of rank 0, creating a state for each of them, and continues by computing the composition of symbols with match sets represented by existing states, for as long as new states are encountered; i.e. the computation is performed for the reachable part of state set \( \mathcal{P}(\text{Items}) \) only. Such reachability-based tabulation is essentially straightforward, but somewhat intricate for trees/n-ary relations, even more so in the presence of filtering. We therefore do not present an example here; see e.g. [9, Chapter 5] or [15] instead. In practice, the size of the RTGs used leads to large but usually sparse tables: e.g. for instruction selection, an RTG may well have hundreds of productions and lead to tables of over 100 MB. Filtering is therefore used to reduce storage space. For example, given match set \( MS(b(c)) \) above, \( b(B) \) can be filtered, as it does not occur as a proper subtree of any item in \( G \).
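The recursive computation just illustrated can be made concrete. The following sketch implements Algorithm (MATCH-SET, REC) with Items = all subtrees of production RHSs and no filtering; the RTG below is a hypothetical one, chosen only so that it reproduces the match sets \( MS(b(c)) \), \( MS(d) \) and \( MS(a(b(c), d)) \) used in the text, and is not claimed to be the paper's own example grammar.

```python
# Trees are nested tuples: ('a', t1, t2) is a(t1, t2); ('c',) is c.
# Nonterminals appear as rank-0 trees such as ('B',).
PRODUCTIONS = [
    ('S', ('a', ('B',), ('d',))),         # S -> a(B, d)
    ('S', ('a', ('b', ('c',)), ('d',))),  # S -> a(b(c), d)
    ('B', ('b', ('B',))),                 # B -> b(B)
    ('B', ('c',)),                        # B -> c
    ('B', ('d',)),                        # B -> d
    ('B', ('S',)),                        # B -> S (unit production)
]

def subtrees(t):
    yield t
    for child in t[1:]:
        yield from subtrees(child)

# Items: all subtrees of production right-hand sides.
ITEMS = {s for _, rhs in PRODUCTIONS for s in subtrees(rhs)}

def closure(ms):
    # Cl: add LHS nonterminals whose complete RHS is matched,
    # iterating because of unit productions such as B -> S.
    changed = True
    while changed:
        changed = False
        for lhs, rhs in PRODUCTIONS:
            if rhs in ms and (lhs,) not in ms:
                ms.add((lhs,))
                changed = True
    return ms

def match_set(t):
    # Comp_a: keep the items whose root symbol equals t's root and
    # whose child subtrees lie in the corresponding children's match
    # sets (identity filter, i.e. no filtering); then close.
    child_sets = [match_set(c) for c in t[1:]]
    composed = {item for item in ITEMS
                if item[0] == t[0] and len(item) == len(t)
                and all(item[i + 1] in child_sets[i]
                        for i in range(len(child_sets)))}
    return closure(composed)

print(match_set(('b', ('c',))))                 # {b(c), b(B), B}
print(match_set(('a', ('b', ('c',)), ('d',))))  # adds S and B by closure
```

On this grammar, \( b(B) \) is an item but never a proper subtree of an item, which is exactly why the filters discussed next may discard it from child match sets.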
Different item categories can be filtered out (and may lead to different space savings, depending on the grammar):

- Filtering trees not occurring as proper subtrees (such as \( b(B) \)); filter TFILT, originally by Turner [19].

- Filtering trees not occurring as the \( i \)th child tree of a node labeled \( a \); filter CFILT, originally by Chase [4,15].

- One of two new filter functions. Our research in taxonomizing the existing algorithms and filter functions led us to describe these new ones, which can be seen as simplifications of Chase's filter functions yet somewhat surprisingly had not been described before:

  - Filtering trees based on index \( i \) only, i.e. trees not occurring as the \( i \)th child tree of any node; filter IFILT.

  - Filtering trees based on symbol \( a \) only, i.e. trees not occurring as a child tree of a node labeled \( a \), at any child position; filter SFILT.

Even more surprisingly given their non-appearance in the literature, these two filters turn out to outperform Chase's filter on both textbook example RTGs and instruction selection RTGs for e.g. the Intel x86 and Sun SPARC families: the index filter results in lower memory use, while the symbol filter results in slightly faster tabulation than with Chase's filter. The experimental results have been described in detail in [5,9,18].

8 Algorithms using stringpath matching

The third subgraph (detail sp-matcher and below) of the taxonomy in Figure 1 contains algorithms for tree acceptance that are derived from algorithms for tree pattern matching. We only briefly sketch the main ideas. The tree pattern matchers that we use in these tree acceptors reduce tree pattern matching to string pattern matching, using a technique first described in [16]. Each tree can be fully characterized by a set of stringpaths, and a tree pattern matches at a certain position in a tree if and only if all its stringpaths do. By traversing the subject tree and using a multiple string pattern matcher (e.g.
[1]), matches of stringpaths can be detected. In [8] (originally presented at this conference as [7]) and [9] we discuss such algorithms in more detail and show that a certain DRFTA construction leads to DRFTAs, i.e. deterministic RF tree automata, that are also usable for stringpath matching. With a little extra bookkeeping, a tree pattern matcher of this kind can be turned into a tree acceptor.

9 Other Parts of the Work

Our work on regular tree algorithms has resulted in two taxonomies and a toolkit of algorithms. In this paper, we have mainly reported on one of the taxonomies, although it was pointed out in Section 3 how similar the tree pattern matching and tree acceptance algorithms and taxonomies are. In this section we present some remarks on the rest of the work. We refer the interested reader to [5,9] for more information. As mentioned in Section 1, taxonomies form a good starting point for the construction of highly coherent algorithm toolkits. Based on the taxonomies of tree acceptance and tree matching algorithms, such an (experimental) toolkit was developed as part of our research. The toolkit contains most of the concrete algorithms and automata constructions from the taxonomies, as well as a number of fundamental algorithms and data structures (such as alphabets, trees, regular tree grammars and simple grammar transformations) and some extensions of tree acceptance algorithms to tree parsing and rudimentary instruction selection. The design of the toolkit was guided by the two taxonomies: the hierarchy of the taxonomies determines the class and interface hierarchies of the toolkit, and the abstract algorithms lead to straightforward method implementations. The toolkit, called FOREST FIRE, is implemented in Java and accompanied by a graphical user interface (GUI) called FIRE WOOD. This GUI supports input, output, creation and manipulation of data structures from the toolkit and was used to interactively experiment with and get insight into algorithms.
More details on the toolkit and GUI can be found in [5,18]. The toolkit and GUI, including source code, example input files and brief manuals, are available for non-commercial purposes via http://www.fastar.org.

10 Concluding Remarks

The two taxonomies we constructed cover many algorithms and automata constructions for tree acceptance and tree pattern matching that appeared in the literature over the past forty years. As for earlier taxonomies, their construction required much time and effort to study the original papers and distill the published algorithms' essential details (more so than in usual scientific research, which is typically limited to studying one or a few existing publications and building on those). Abstraction, and the sequential addition of details to obtain concrete algorithms, were essential and powerful means to describe the algorithms clearly and to make their correctness more apparent. The uniform presentation in the taxonomies improves accessibility and shows how the algorithms are related: comparing algorithms previously presented in different styles has become easier, and consultation of the original papers is often no longer necessary. The taxonomies also led to new and rediscovered algorithms: for example, two new filters were discovered which, though conceptually simple, are practically relevant. Furthermore, Turner's filter was more or less rediscovered. Our initial literature search, although apparently quite extensive, did not find Turner's paper, likely because it was not referred to by any other literature in the same field. As a result, we came up with the rather basic filter independently, before eventually finding it in the literature. The uniform presentation simplified and guided the high-level design of our toolkit of regular tree algorithms, although the choice of representations for basic data structures still took some time and effort.
Experiments with the toolkit provided some interesting results, including the fact that the new filters outperformed Chase's more complex but frequently used filter in many cases. The results from our research are thus both theoretical and practical, ranging from formal definitions and algorithm taxonomies to a toolkit and experimental results. A form of symbiosis occurred between the theoretical and the practical: the taxonomies were helpful in constructing the toolkit, while the experiments with the toolkit in turn led to a better understanding of the theoretical definitions and algorithm descriptions, thus helping to simplify the taxonomies.
Proof rules for probabilistic loops

Carroll Morgan*

15 August 1996

Abstract

Probabilistic predicate transformers provide a semantics for imperative programs containing both demonic and probabilistic nondeterminism. Like the (standard) predicate transformers popularised by Dijkstra, they model programs as functions from final results to the initial conditions sufficient to achieve them. This paper presents practical proof rules, using the probabilistic transformers, for reasoning about iterations when probability is present. They are thoroughly illustrated by example: probabilistic binary chop, faulty factorial, the martingale gambling strategy and Herman's probabilistic self-stabilisation. Just as for traditional programs, weakest-precondition based proof rules for program derivation are an important step on the way to designing more general refinement techniques, or even a refinement calculus, for imperative probabilistic programming.

1 Introduction

The standard predicate transformers described by Dijkstra [3] provide a model in which a program is a function: it takes a set of desired final states to the set of all initial states from which the program's execution is guaranteed to produce one of those final states. Regarding sets of states as predicates over the state space, programs are thus predicate transformers. A conspicuous feature of Dijkstra's presentation was the appearance of demonic nondeterminism, a form of choice in programs over which the users have no control and about which nothing can be predicted. It arises naturally in the predicate-transformer approach, and as a result benefits from a particularly simple treatment there. In the work of Kozen [12] demonic nondeterminism is replaced by probabilistic nondeterminism.
Probabilistic nondeterminism is not controllable by the user (either), but it is to some extent predictable: in repeated runs of a program
\[ \text{coin} := \text{heads} \ 1/2 \oplus \ \text{coin} := \text{tails}, \]
one would have the same expectations about the final value of the program variable \(\text{coin}\) as one would have about the repeated flipping of a real coin. We have extended the above [18], presenting a system in which demonic and probabilistic nondeterminism are treated together in a simple way: as well as building on the original work of Dijkstra and of Kozen, we took advantage of later work by Claire Jones and Gordon Plotkin [10] and a 'relational' probabilistic model proposed by JiFeng He [7] (who used 'convex closure' [19] to generalise an earlier imperative model due to Kozen [11]).

* Morgan is a member of the Probabilistic Systems Group within the Programming Research Group at Oxford University; the other members are Annabelle McIver, Jeff Sanders and Karen Seidel. Our work is supported by the EPSRC.

One of the principal results of our earlier work [18] is the exact determination of the 'healthiness conditions' that apply to probabilistic predicate transformers; they generalise the conditions given by Dijkstra for standard predicate transformers. Our overall aim is to broaden the scope of refinement methods to include more aspects of 'real' system design, in this case that the ultimate components from which a system is built are never entirely reliable. When their unreliability can be quantified, probabilistic program derivation, or refinement, can be used to match low-level unreliability of components to high-level 'tolerable' unreliability in a specification. The contribution of this paper specifically is to use the probabilistic healthiness conditions to propose and justify methods for the treatment of probabilistic loops; in that way we move the theory [18] towards everyday practice.
The main theorems concern probabilistic invariants and variants, and generalise the corresponding standard theorems; our probabilistic healthiness conditions are crucial to their proofs, and to the separate treatment of partial and total correctness. Informally, the use of invariants is just as in standard programs, based on the work of Hoare and Floyd [9, 4]: the invariant is established initially; it is maintained; and on termination additionally the negation of the repetition condition holds. Here however we use probabilistic invariants, as anticipated by Kozen, by Sharir, Pnueli and Hart [24], and finally by Jones [12, 10]; we have generalised their work by treating nondeterminism as well. The probabilistic variant rule (and the related '0-1 Law') was earlier proposed by Hart, Sharir and Pnueli [6] and shown to be sound and finitarily complete: a variant function must be bounded above and below, and have a nonzero probability of decrease. Our contribution here is to express that rule at the level of probabilistic predicate transformers, reproducing the proofs of soundness and finitary completeness in that context. We achieve a slight generalisation in that catastrophic failure (divergence, or abort) is included naturally as a possible behaviour of programs in our model. Sections 3–4 give the main theorems for the use of invariants and the way in which they are combined with information about loop termination; they are illustrated by the examples of Sec. 5, chosen to reveal the various combinations of probabilistic and standard variants and invariants. Sections 6–8 treat termination on its own. Section 9 provides a final example, a recent 'showcase' for probabilistic formalisms in which certain termination is the principal feature.

2 Probabilistic predicate transformers

Standard predicates are sets of states, and can thus be regarded as characteristic functions from the state space to \{0, 1\}.
In practice, that is, for reasoning about specific programs, they are written as Boolean-valued expressions (formulae) over program variables. Probabilistic predicates are functions from the state space to the entire closed interval \([0, 1]\).\(^1\) In practice they are written as real-valued expressions over the program variables.

\(^1\)Elsewhere [18] we take a more general but equivalent view, that they are functions into the non-negative reals \([0, \infty)\).

The manipulation of the predicate transformers in the two systems, standard and probabilistic, is very similar. For example, in both cases assignment is syntactic substitution, sequential composition (of programs) is functional composition (of the predicate transformers) and recursion is given by least fixed points. For a full presentation we refer the reader to our other publications [23, 18]. Because symbols may be confused in the probabilistic case, however, we adopt the following notational conventions to separate them as much as possible.

**Notation 2.1** Standard predicates are Boolean expressions over the state variables, and are written in the normal way. (We use ⇔ for bi-implication.) Probabilistic predicates are real-valued expressions between 0 and 1 inclusive. The brackets \([\cdot]\) convert a standard predicate to a probabilistic predicate, so that \([true]\) is (the constant expression) 1 and \([false]\) is 0. The overbar operator denotes subtraction from 1, so that \(\overline{[P]}\) is the same as \([\neg P]\) for standard predicate \(P\). Minimum and maximum are written \(\sqcap, \sqcup\) respectively, with \(\sqcap\) binding more tightly. The relations 'everywhere no more than', 'everywhere equal' and 'everywhere no less than' between probabilistic predicates are written \(\Rightarrow, \equiv, \Leftarrow\) respectively.
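Notation 2.1 can be read very concretely over a finite state space. The following sketch encodes probabilistic predicates as Python functions from states to \([0, 1]\); this encoding, and the small integer state space, are our own illustration rather than the paper's formalisation.

```python
STATES = range(-3, 4)

def bracket(p):                 # [P]: embed a standard predicate
    return lambda x: 1.0 if p(x) else 0.0

def bar(q):                     # overbar: subtraction from 1
    return lambda x: 1.0 - q(x)

def meet(q0, q1):               # pointwise minimum (the text's sqcap)
    return lambda x: min(q0(x), q1(x))

def everywhere_leq(q0, q1):     # 'everywhere no more than'
    return all(q0(x) <= q1(x) for x in STATES)

# the overbar law: [P] with overbar agrees with [not P]
P = lambda x: x <= 0
assert all(bar(bracket(P))(x) == bracket(lambda y: not P(y))(x)
           for x in STATES)

# the relation discussed next in the text:
# 1/2 is 'everywhere no more than' [x <= 0]/2 + [x >= 0]/2 ...
rhs = lambda x: bracket(lambda y: y <= 0)(x) / 2 \
              + bracket(lambda y: y >= 0)(x) / 2
assert everywhere_leq(lambda x: 0.5, rhs)
# ... but 'everywhere equal' fails, since at x = 0 the value is 1
assert rhs(0) == 1.0
```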
With the above conventions we have for example that
\[ \frac{1}{2} \Rightarrow [x \leq 0]/2 + [x \geq 0]/2 \]
because for all values of the state variable \(x\) the right-hand side is at least \(1/2\). However the stronger claim
\[ \frac{1}{2} \equiv [x \leq 0]/2 + [x \geq 0]/2 \]
is false, since when \(x\) is 0 the left-hand side is \(1/2\) but the right-hand side is 1. The basic properties of predicate transformers needed for our presentation are collected in App. B, and are referred to here as 'facts'. Those concerning \(wp\) are consequences of the healthiness laws [18]. We make essential use also of weakest liberal probabilistic preconditions in some of our proofs; the facts concerning them are proved elsewhere [21]. Since \(wlp\) does not appear in the statements of the principal theorems, however, the \(wlp\)-theory is not needed for use of our results.

**Notation 2.2** We write \(f.x\) for the function \(f\) applied to the argument \(x\) (rather than \(f(x)\)). The application operator \(\cdot\) is left associative.

**Notation 2.3** We write \(:=\) for 'is defined to be'.

3 Partial loop correctness

The *weakest liberal precondition* of a program describes its *partial* correctness, identifying those initial states from which the program either establishes a given postcondition or fails to terminate [3]. The more conventional *weakest* (not liberal) *precondition* requires termination as well, and thus describes *total* correctness. We write \(wlp.\text{prog}.Q\) and \(wp.\text{prog}.Q\) for the weakest liberal and weakest preconditions respectively of program \(\text{prog}\) and postcondition \(Q\).\(^2\)

\(^2\)Section 4 of Morgan et al. [18] summarises the probabilistic wp-rules (Fig. 2 there) and treats some elementary examples. Section 7 (Fig. 4) collects the healthiness conditions.
The \(wlp\) semantics differs from the \(wp\) semantics in these two respects.\(^3\)

1. For the nowhere-terminating program we define \(wlp.\texttt{abort}.Q := 1\) for all postconditions \(Q\). (Compare \(wp.\texttt{abort}.Q := 0\).)

2. The weakest liberal precondition semantics of a recursive program is given by a *greatest* (rather than least) fixed point.

\(^3\)For uniformity we use probabilistic predicates throughout, even for standard programs, writing \(wp.(x := y).[x = y] \equiv 1\) for example rather than the conventional \(wp.(x := y).(x = y) \equiv \text{true}\).

When considering loops, a special case of recursion, the \(wlp\) semantics is therefore as given in Def. 3.3 following.

**Notation 3.1** The program \(\textit{loop}\) is defined
\[ \textit{loop} := \text{do } G \rightarrow \textit{body} \text{ od} , \]
for standard predicate \(G\) (the loop guard) and program \(\textit{body}\) (the loop body). In calculations we always treat \(G\) as a probabilistic predicate, even though it takes only standard values, thus avoiding clutter by omitting the brackets \([G]\).

**Notation 3.2** We use \(\eta\) to indicate a greatest fixed point.

**Definition 3.3 Weakest liberal precondition for loop.** For any postcondition \(Q\) we define
\[ wlp.\textit{loop}.Q := (\eta P \cdot G \sqcap wlp.\textit{body}.P \sqcup \overline{G} \sqcap Q) . \]

From Def. 3.3 we derive immediately the usual rule for partial correctness of loops, based on the preservation of an invariant.

**Lemma 3.4** Let predicate \(I\) be a \(wlp\)-invariant of \(\textit{loop}\), thus satisfying
\[ G \sqcap I \Rightarrow wlp.\textit{body}.I . \tag{1} \]
Then in fact
\[ I \Rightarrow wlp.\textit{loop}.(\overline{G} \sqcap I) . \tag{2} \]

**Proof:** We substitute \(I\) for \(P\) in the right-hand side of Def.
3.3, setting \(Q := \overline{G} \sqcap I\), and find
\[
\begin{align*}
& G \sqcap wlp.\textit{body}.I \sqcup \overline{G} \sqcap (\overline{G} \sqcap I) \\
\Leftarrow\ & G \sqcap (G \sqcap I) \sqcup \overline{G} \sqcap (\overline{G} \sqcap I) \\
\equiv\ & I ,
\end{align*}
\]
obtaining the result immediately from the elementary property of greatest fixed points that \(x \leq f.x\) implies \(x \leq \eta.f\).

It is worth noting that the assumption (1) of Lem. 3.4 is weaker than the one used in the standard rule for partial correctness of loops, where one conventionally finds \(wp\) instead:
\[ G \sqcap I \Rightarrow wp.\textit{body}.I . \tag{3} \]
The difference is real, even in the standard case, but only if we are genuinely interested in partial correctness. With Lem. 3.4 we can show for example
\[ wlp.(\text{do } x \neq 0 \rightarrow \text{abort od}).[x = 0] \equiv 1 , \tag{4} \]
choosing \(I := 1\) to do so: if the loop terminates then it establishes \(x = 0\). The reason that (3) is used in the standard case is that it suffices for total correctness of the loop, and avoids introducing the extra concept of \(wlp\): if indeed \(I \Rightarrow wp.\textit{loop}.1\) then we must have \(G \sqcap I \Rightarrow wp.\textit{body}.1\) in any case, making (1) and (3) equivalent. For probabilistic programs the above analysis does not apply,\(^4\) and as shown below use of the stronger (3) is required for soundness in general (Ex. 4.7).

4 Total loop correctness

In the standard case Fact B.1 is used to combine partial loop correctness with a termination argument, to give total loop correctness. Here we rely on its probabilistic analogue, Fact B.2.

**Notation 4.1** The binary operator \(\&\) is defined
\[ Q_0 \,\&\, Q_1 := (Q_0 + Q_1 - 1) \sqcup 0 .
\]
It is easily checked that \(Q_0 \& Q_1 \equiv Q_0 \sqcap Q_1\) when either \(Q_0\) or \(Q_1\) is standard, and thus that Fact B.1 results when Fact B.2 is specialised to \(Q_0, Q_1 := Q, 1\) for standard \(Q\). We have further that \(\&\) is commutative and associative with identity 1. With Fact B.2 and Lem. 3.4 we have immediately a rule for total correctness of probabilistic loops.

**Notation 4.2** The termination condition of \(\textit{loop}\) is defined
\[ T := wp.\textit{loop}.1 . \]

\(^4\)The reasoning fails at the point of concluding \(G \sqcap I \Rightarrow wp.\textit{body}.I\) from
\[ G \sqcap I \Rightarrow wp.\textit{body}.1 \quad \text{and} \quad G \sqcap I \Rightarrow wlp.\textit{body}.I . \]
Applying Fact B.2 to those two inequalities gives only
\[ wp.\textit{body}.I \equiv wp.\textit{body}.(1 \& I) \Leftarrow wlp.\textit{body}.I \& wp.\textit{body}.1 \Leftarrow (G \sqcap I) \& (G \sqcap I) \equiv G \sqcap (I \& I) , \]
strictly weaker than the above when \(I\) is probabilistic, since in that case \(I \& I \not\equiv I\). Note for example that
\[ 1/2 \equiv wlp.(x := 0 \ 1/2 \oplus \ \text{abort}).[x = 1] \]
and
\[ 1/2 \equiv wp.(x := 0 \ 1/2 \oplus \ \text{abort}).1 \]
do not imply \(1/2 \Rightarrow wp.(x := 0 \ 1/2 \oplus \ \text{abort}).[x = 1]\), which indeed is not true.

**Lemma 4.3** Let invariant \(I\) satisfy \(G \sqcap I \Rightarrow wlp.\textit{body}.I\). Then
\[ I \& T \Rightarrow wp.\textit{loop}.(\overline{G} \sqcap I) . \]

**Proof:**
\[
\begin{align*}
& wp.\textit{loop}.(\overline{G} \sqcap I) \\
\equiv\ & wp.\textit{loop}.((\overline{G} \sqcap I) \& 1) \\
\Leftarrow\ & wlp.\textit{loop}.(\overline{G} \sqcap I) \& wp.\textit{loop}.1 \\
\Leftarrow\ & I \& T .
\end{align*}
\]

Lemma 4.3 suffices for many situations, in particular those in which either \(I\) or \(T\) is standard, since in that case \(I \& T \equiv I \sqcap T\). When both \(I\) and \(T\) are probabilistic, however, the precondition of Lem. 4.3 can be too low (pessimistic, though still correct). But as the following\(^5\) shows, we cannot just replace \(\&\) by \(\sqcap\) on the left-hand side.
**Example 4.4** Take invariant \(I := [n = 0]/2 + [n = 1]\) in the program loop, defined
\[
\begin{align*}
\text{do}\;\; & n = 0 \rightarrow n := -1 \;{}_{1/2}\oplus\; n := +1 \\
[\!]\;\; & n > 0 \rightarrow \text{skip} \\
\text{od} . &
\end{align*}
\]
One checks directly (at \(n = 0\) and \(n = 1\)) that \(I\) satisfies the hypothesis \(G \sqcap I \Rightarrow wlp.body.I\) of Lem. 4.3. Then we have
\[
\begin{align*}
T &\;\equiv\; [n < 0] + [n = 0]/2 \\
I \sqcap T &\;\equiv\; [n = 0]/2 \\
\overline{G} \sqcap I &\;\equiv\; 0 ,
\end{align*}
\]
but
\[
I \sqcap T \;\equiv\; [n = 0]/2 \;\not\Rightarrow\; wp.loop.0 \;\equiv\; wp.loop.(\overline{G} \sqcap I) .
\]

Thus we improve Lem. 4.3 in a different way, below, where for simplicity we assume that body is deterministic.\(^{6}\) The strategy is to develop a larger invariant \(I'\) than the \(I\) we are given, so that when we eventually form the precondition \(I' \mathbin{\&} T\) we recover the original \(I\). First we show strict \(wp\)-invariance of \(T\) itself.

**Lemma 4.5** For all loop we have \(G \sqcap T \equiv G \sqcap wp.body.T\).
**Proof:** We calculate
\[
\begin{align*}
& G \sqcap T \\
\equiv\;& G \sqcap wp.loop.1 \\
\equiv\;& G \sqcap (G \sqcap wp.body.(wp.loop.1) \;\sqcup\; \overline{G} \sqcap 1) && \text{\(wp\)-semantics of do} \cdots \text{od} \\
\equiv\;& G \sqcap wp.body.T .
\end{align*}
\]

\(^{5}\)Annabelle McIver suggested this possibility, and found the counterexample.

\(^{6}\)We recall [18] that deterministic means ‘contains no demonic nondeterminism unless aborting’, so that for example
\[
x := 0 \;{}_{1/2}\oplus\; \text{abort}
\]
is deterministic (as is abort itself). Syntactically, deterministic means (roughly) ‘contains no explicit nondeterministic choice operation’; our excuse for calling (for example) a coin-flipping program deterministic is that the distribution of its results is predictable. (Also, it is maximal in the refinement ordering.)

We now prove our main theorem for total correctness of deterministic loops; note that we assume a \(wp\)-invariance property (stronger than the \(wlp\)-invariance assumption of Lem. 4.3).
**Theorem 4.6** If \(I\) is a \(wp\)-invariant of loop with deterministic body, and \(I \Rightarrow T\), then
\[
I \;\Rightarrow\; wp.loop.(\overline{G} \sqcap I) .
\]
**Proof:** We show first that \(wp\)-invariance of \(I\) implies \(wlp\)-invariance of
\[
I' \;:=\; I + 1 - T .
\]
Note we rely on \(I \Rightarrow T\) for well-definedness (that \(I' \Rightarrow 1\)). We reason
\[
\begin{align*}
& wlp.body.I' \\
\equiv\;& wlp.body.(I + (1 - T)) && \text{definition } I' \\
\equiv\;& wp.body.I + wlp.body.1 - wp.body.T && \text{Fact B.3 twice; body deterministic} \\
\equiv\;& wp.body.I + 1 - wp.body.T && \text{Fact B.4} \\
\Leftarrow\;& G \sqcap (wp.body.I + 1 - wp.body.T) \\
\equiv\;& G \sqcap wp.body.I + G - G \sqcap wp.body.T && \text{\(G\) standard} \\
\Leftarrow\;& G \sqcap I + G - G \sqcap T && \text{assumed \(wp\)-invariance of \(I\); Lem. 4.5} \\
\equiv\;& G \sqcap (I + 1 - T) && \text{\(G\) standard} \\
\equiv\;& G \sqcap I' . && \text{definition } I'
\end{align*}
\]
From Lem. 4.3 we then conclude immediately
\[
I \;\equiv\; I' \mathbin{\&} T \;\Rightarrow\; wp.loop.(\overline{G} \sqcap I') \;\equiv\; wp.loop.(\overline{G} \sqcap I) ,
\]
since for the last step we have
\[
\begin{align*}
& \overline{G} \sqcap I' \\
\equiv\;& \overline{G} \sqcap (I + 1 - T) \\
\equiv\;& \overline{G} \sqcap I + \overline{G} - \overline{G} \sqcap T && \text{\(\overline{G}\) standard} \\
\equiv\;& \overline{G} \sqcap I . && \overline{G} \text{ implies immediate termination: thus } \overline{G} \equiv \overline{G} \sqcap T
\end{align*}
\]

Thm. 4.6 is extended to the nondeterministic case by Thm. A.3, and it is not hard to show that the latter in turn implies Lem. 4.3: thus they are of equal power.

The following example shows the \(wp\)- (rather than \(wlp\)-) invariance of the invariant \(I\) to be necessary for soundness of Thm. 4.6 in general. (Recall from Sec. 3 that it is not necessary in the standard case.)
**Example 4.7** For this example let loop be
\[
\text{do } b \rightarrow b := \text{false} \;{}_{1/2}\oplus\; \text{abort} \text{ od} ,
\]
for Boolean \(b\), and note that we have for termination
\[
T \;\equiv\; [\neg b] + [b]/2 .
\]
Define \(I := 1/2\), so that \(I \Rightarrow T\) as required by Thm. 4.6, and reason
\[
\begin{align*}
& wlp.body.I \\
\equiv\;& wlp.(b := \text{false} \;{}_{1/2}\oplus\; \text{abort}).(1/2) \\
\equiv\;& (1/2)(wlp.(b := \text{false}).(1/2)) + (1/2)(wlp.\text{abort}.(1/2)) \\
\equiv\;& (1/2)(1/2) + (1/2)(1) \\
\equiv\;& 3/4 \\
\Leftarrow\;& [b] \sqcap 1/2 \\
\equiv\;& G \sqcap I
\end{align*}
\]
to show \(wlp\)-invariance of \(I\), the other requirement of the theorem. But
\[
\begin{align*}
& wp.loop.(\overline{G} \sqcap I) \\
\equiv\;& wp.loop.([\neg b]/2) \\
\equiv\;& wp.(\text{if } b \text{ then } b := \text{false} \;{}_{1/2}\oplus\; \text{abort fi}).([\neg b]/2) \\
\equiv\;& [b] \sqcap ((1/2)(1/2) + (1/2)(0)) \;\sqcup\; [\neg b] \sqcap [\neg b]/2 \\
\equiv\;& [b]/4 \sqcup [\neg b]/2 ,
\end{align*}
\]
showing the conclusion of Thm. 4.6 to be false in this case: the precondition \([b]/4 \sqcup [\neg b]/2\) is not at least \(I\), since when \(b\) holds, for example, the former is \(1/4\) and the latter \(1/2\). \(\square\)

5 Three examples of total correctness

With Thm. 4.6 we are able to discover total correctness properties of loops, provided we are given their termination conditions. Rigorous termination arguments themselves are the subject of Sec. 7 below; here we treat termination informally.

The examples below illustrate the interplay of invariant and termination condition in the three possible probabilistic cases: one, the other, or both are probabilistic.

5.1 Uniform binary selection

In this example the termination condition is standard, indicating either certain termination (when 1) or failure to terminate (when 0).
Given a positive integer \(N\), an integer \(l\) is to be chosen uniformly so that \(0 \leq l < N\); the method is by successive divisions of the choice interval into roughly equal halves.

**Example 5.1** Let prog, init and loop be as in Fig. 1: given arbitrary integer \(C\), we are interested in the probability that \(l = C\) finally. We define
\[
I \;:=\; [l \leq C < h]/(h - l) ,
\]
and with the following calculation show it to be invariant.\(^{7}\) In the calculation, we start with the overall postcondition and reason backwards towards the precondition, indicating between predicates when \(wp\) is applied to give the lower (the right-hand side of a reasoning step) from the upper (the left-hand side). We have

\(^{7}\)We define \(0/0 := 0\) for convenience, so that \(I\) is identically 0 when \(l = h\). For any \(x \neq 0\) we stipulate further that \(x/0\) is some real number, but we do not care which. Note that the program itself does not divide by 0.

\[
\begin{align*}
\textit{init} \rightarrow\quad & l, h := 0, N; \\
\textit{loop} \rightarrow\quad & \text{do } l + 1 \neq h \rightarrow \\
& \qquad p := (l + h) \div 2 ; \\
& \qquad l := p \;\;{}_{h-p}\oplus_{p-l}\;\; h := p \\
& \text{od}
\end{align*}
\]
The program prog is the whole of the above. We write \({}_{m}\oplus_{n}\) as a convenient abbreviation for \({}_{m/(m+n)}\oplus\).

Figure 1: Example 5.1, uniform binary selection.
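The claim of Example 5.1 can also be spot-checked by exact recursion over the reachable intervals \((l, h)\); the sketch below assumes the reading of Fig. 1 in which \(p := (l + h) \div 2\) and then \(l := p\) is chosen with probability \((h - p)/(h - l)\), else \(h := p\) (the function name is our own):

```python
# Exact check that the program of Fig. 1 selects l uniformly from [0, N):
# rational arithmetic avoids any rounding in the probabilities.
from fractions import Fraction

def select_prob(l, h, C):
    """Exact probability that the loop, started at (l, h), ends with l = C."""
    if l + 1 == h:                       # guard false: the loop has terminated
        return Fraction(1) if l == C else Fraction(0)
    p = (l + h) // 2
    w = Fraction(h - p, h - l)           # probability of the branch l := p
    return w * select_prob(p, h, C) + (1 - w) * select_prob(l, p, C)

N = 10
dist = [select_prob(0, N, C) for C in range(N)]
assert all(pr == Fraction(1, N) for pr in dist)   # uniform: each value has probability 1/N
assert sum(dist) == 1                             # certain termination (T = 1)
```

This agrees with the conclusion below: each of the \(N\) values is reached with probability at least (in fact exactly) \(1/N\).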
\[
\begin{align*}
& [l \leq C < h]/(h - l) \\
& \qquad\textit{after applying } wp.(l := p \;{}_{h-p}\oplus_{p-l}\; h := p) \\
\equiv\;& ((h - p)/(h - l))\,[p \leq C < h]/(h - p) \;+\; ((p - l)/(h - l))\,[l \leq C < p]/(p - l) \\
\equiv\;& [l < p < h] \sqcap [l \leq C < h]/(h - l) && \text{in spite of 0-divisions: lower is 0 whenever upper contains divisions by 0} \\
& \qquad\textit{after applying } wp.(p := (l + h) \div 2) \\
\equiv\;& [l < (l + h) \div 2 < h] \sqcap [l \leq C < h]/(h - l) \\
\equiv\;& [l + 1 \neq h] \sqcap [l \leq C < h]/(h - l) ,
\end{align*}
\]
as required.

Standard reasoning with variant \(h - l\) shows that termination is certain because \(N > 0\) initially; thus \(T \equiv 1\), implying \(I \Rightarrow T\) trivially, and we have immediately from Thm. 4.6 that
\[
[l \leq C < h]/(h - l) \;\equiv\; I \;\Rightarrow\; wp.loop.(\overline{G} \sqcap I) \;\equiv\; wp.loop.[l = C] ,
\]
and so finish with
\[
[0 \leq C < N]/N \;\equiv\; wp.init.([l \leq C < h]/(h - l)) \;\Rightarrow\; wp.prog.[l = C] .
\]
We conclude overall that for any integer \(C\) the probability of prog's setting \(l\) to \(C\) finally is at least \(1/N\) provided \(0 \leq C < N\), and that since there are exactly \(N\) such values for \(C\) we have achieved uniform selection from the given interval. The probability is (only) at least 0 otherwise — when \(C\) lies outside the interval we should assume that we have ‘no chance’ of establishing \(l = C\) finally. \(\square\)

Note that the proof in Ex. 5.1 of invariance of \(I\) would succeed even if \(p\) were chosen nondeterministically between \(l\) and \(h\) rather than being assigned the specific value \((l + h) \div 2\). In that case we would appeal to the more general Thm.
A.3 to reach the same conclusion.\(^{8}\)

\(^{8}\)In the notation of a refinement calculus [16] such a program choose could be written \(p : [\,l + 1 \neq h,\; l < p < h\,]\), with meaning given by
\[
wp.\textit{choose}.Q \;:=\; [l + 1 \neq h] \sqcap (\sqcap p \mid l < p < h \cdot Q) .
\]

\[
\begin{align*}
\textit{init} \rightarrow\quad & n, f := N, 1; \\
\textit{loop} \rightarrow\quad & \text{do } n \neq 0 \rightarrow \\
& \qquad f := f \times n; \\
& \qquad n := n - 1 \;\;{}_{p}\oplus\;\; n := n + 1 \\
& \text{od}
\end{align*}
\]
The program prog is the whole of the above. The decrementing of \(n\) fails probabilistically, sometimes incrementing instead.

Figure 2: Example 5.2, faulty factorial.

5.2 Faulty factorial

In this example both the termination condition and the invariant are probabilistic. Given a natural number \(N\), the program is to (attempt to) set \(f\) to \(N!\) in spite of its containing a probabilistically faulty subtraction.

**Example 5.2** The program is shown in Fig. 2, and is the conventional factorial algorithm except that the decrement of \(n\) sometimes increments instead.\(^{9}\) When \(p\) is 1, making the program standard (and decrementing of \(n\) certain), the invariant \(N! = f \times n!\) suffices in the usual way to show that \(wp.prog.[f = N!] \equiv 1\). In general, however, that postcondition is achieved only if the decrement alternative is chosen on each of the \(N\) executions of the loop body, thus with probability \(p^{N}\).

\(^{9}\)Perhaps it is struck by a cosmic ray.

More rigorously we define invariant \(I := p^{n}[N! = f \times n!]\), showing its preservation with the calculation
\[
\begin{align*}
& p^{n}[N! = f \times n!] \\
& \qquad\textit{after applying } wp.(n := n - 1 \;{}_{p}\oplus\; n := n + 1) \\
\equiv\;& p(p^{n-1})[N! = f \times (n - 1)!] \;+\; (1 - p)(p^{n+1})[N! = f \times (n + 1)!] \\
\Leftarrow\;& p^{n}[N! = f \times (n - 1)!] && \text{dropping the right additive term} \\
& \qquad\textit{after applying } wp.(f := f \times n) \\
\equiv\;& p^{n}[N! = f \times n \times (n - 1)!] \\
\equiv\;& [n \neq 0] \sqcap p^{n}[N! = f \times n!] ,
\end{align*}
\]
as required.

The exact termination condition depends on \(p\). Standard random walk results [5] show that loop terminates certainly when \(p \geq 1/2\), but with probability only \((p/(1 - p))^{n}\) otherwise. In either case, however, the termination condition is at least \(p^{n}\) and so exceeds the invariant: thus Thm. 4.6 applies. We conclude
\[
wp.prog.[f = N!] \;\Leftarrow\; wp.init.(p^{n}[N! = f \times n!]) \;\equiv\; p^{N}[N! = 1 \times N!] \;\equiv\; p^{N} ,
\]
as suggested by our informal analysis earlier. \(\square\)

\[
\begin{align*}
\textit{init} \rightarrow\quad & c, b := C, 1; \\
\textit{loop} \rightarrow\quad & \text{do } b \neq 0 \rightarrow \\
& \qquad \text{if } b \leq c \text{ then} \\
& \qquad\qquad c := c - b; \\
& \qquad\qquad c, b := c + 2b, 0 \;\;{}_{1/2}\oplus\;\; b := 2b \\
& \qquad \text{fi} \\
& \text{od}
\end{align*}
\]
The program prog is the whole of the above. The gambler's capital \(c\) is initially \(C\), and his intended bet \(b\) is initially 1. On each iteration, if his intended bet does not exceed his capital, he is allowed to place it and has \(1/2\) chance of winning. If he wins, he receives twice his bet in return, and sets his intended bet to 0 to indicate he is finished; if he loses, he receives nothing and doubles his intended bet — hoping to win next time.

If he loses sufficiently often (in succession), his intended bet \(b\) will eventually be more than he can afford — his remaining capital \(c\) — and he will then be ‘trapped’ forever within the iteration.

Figure 3: Example 5.3, the martingale.

5.3 The martingale

Here the termination condition is probabilistic and the invariant is standard.
The martingale is the gambling strategy of doubling one's bet after each loss of an even wager: since the wager is won eventually, with probability 1, an overall profit seems guaranteed.\(^{10}\) As is well known, however, the flaw in the martingale is that the gambler runs the risk of using all his capital before the probabilistically certain win: his capital is finite, but the number of bets before the eventual win can be arbitrarily large.

**Example 5.3** We model the martingale as in Fig. 3. If the gambler cannot place his bet, because his capital has become too small, he simply remains within the loop.

It is easy to show that \(I := [c + b = C + 1]\) is an invariant of loop; and with some arithmetic it can be shown informally that — with the given initialisation — the chance of losing consistently until the capital is exhausted is \(2/P\), where \(P\) is the smallest power of two exceeding \(C + 1\). Thus \(wp.prog.1\) is just \(1 - 2/P\).

There are two problems in applying Thm. 4.6 at this point, however. The first is that, although \(1 - 2/P\) is the termination condition of prog as a whole, we have not established the termination condition of loop itself; though we could calculate it, it would be a messy expression in terms of general initial values for \(b\) and \(c\). The second problem is that the invariant is not less than the termination condition: after the initialisation shown, for example, we have \(I \equiv 1\) but \(1 \not\Rightarrow T\).

Both problems can be solved by using Lem. 4.3 in this case rather than Thm. 4.6; whatever \(T\) is in general, still we have
\[
\begin{align*}
& I \mathbin{\&} T \\
\equiv\;& [c + b = C + 1] \mathbin{\&} T \\
& \qquad\textit{after applying } wp.(c, b := C, 1) \\
\equiv\;& 1 \mathbin{\&} wp.prog.1 \\
\equiv\;& 1 \mathbin{\&} (1 - 2/P) \\
\equiv\;& 1 - 2/P ,
\end{align*}
\]
showing that indeed
\[
wp.prog.[c = C + 1] \;\Leftarrow\; 1 - 2/P .
\]
With probability at least \(1 - 2/P\) the gambler eventually increases his capital by exactly 1. \(\square\)

\(^{10}\)Karen Seidel suggested using the martingale for this example.

6 The 0-1 law for termination

Beyond its use for specific programs, Thm. 4.6 has a general consequence that will be of importance to our later analysis of termination.\(^{11}\)

The 0-1 Law of Hart et al. [6] reads informally as follows. Let process \(P\) be defined over a state space \(S\), and suppose that from every state in some subset \(S'\) of \(S\) the probability of \(P\)'s eventual escape from \(S'\) is at least \(p\), for some fixed \(p > 0\). Then \(P\)'s escape from \(S'\) is certain, occurring with probability 1. More succinctly one could say that the infimum over \(S'\) of eventual escape probability is either 0 or 1: it cannot lie properly in between.

Note that we do not require that for every state in \(S'\) the probability of immediate escape is at least \(p\) — that is a much stronger condition, from which the certainty of eventual escape is obvious.

In our context we fix loop and choose an invariant \(I\): the process is then the iteration of body, leading to eventual escape from the set of states \(G \sqcap I\) — leading thus equivalently to eventual termination of the loop. The 0-1 Law in that form is easily proved from our Thm. 4.6.

**Lemma 6.1** Let \(I\) be a \(wp\)-invariant of loop with termination condition \(T\) (as in Not. 4.2).\(^{12}\) If for some fixed probability \(p > 0\) we have \(p(I) \Rightarrow T\), then in fact \(I \Rightarrow T\).
**Proof:** With \(G\) being standard, \(wp\)-invariance of \(I\) and Fact B.6, we have
\[
G \sqcap p(I) \;\equiv\; p(G \sqcap I) \;\Rightarrow\; p(wp.body.I) \;\equiv\; wp.body.(p(I)) ,
\]
so that also \(p(I)\) is a \(wp\)-invariant of loop. We then reason
\[
\begin{align*}
& p(I) \\
\Rightarrow\;& wp.loop.(\overline{G} \sqcap p(I)) && \text{\(wp\)-invariance of \(p(I)\); \(p(I) \Rightarrow T\); Thm. 4.6} \\
\equiv\;& wp.loop.(p(\overline{G} \sqcap I)) && \text{\(\overline{G}\) standard} \\
\equiv\;& p(wp.loop.(\overline{G} \sqcap I)) && \text{Fact B.6} \\
\Rightarrow\;& p(wp.loop.1) && \text{Fact B.7} \\
\equiv\;& p(T) && \text{definition } T
\end{align*}
\]
and, since \(p \neq 0\), our result follows by dividing both sides by \(p\). \(\square\)

Aside from its intrinsic interest, the importance of Lem. 6.1 is that it will give us a very general variant-based argument for establishing termination of probabilistic loops.

\(^{11}\)From this point we will refer to Theorems 4.6 and A.3 together as just Thm. 4.6. All results proved hold in the general nondeterministic case.

\(^{12}\)This lemma holds even for probabilistic \(I\) — but in that case one cannot speak so readily of ‘states satisfying \(I\)’ in our informal discussion.

7 Probabilistic variant arguments

Termination of standard loops is conventionally shown using ‘variants’ based on the state: they are integer-valued expressions over the state variables that are bounded below but still strictly decreased by each iteration of the loop. That method is complete (up to expressibility) since informally one can always define a variant
\[
\textit{variant} \;:=\; \text{‘the largest number of iterations still possible from the current state’} ,
\]
which satisfies the above conditions trivially if the loop indeed terminates.
For probabilistic programs however the standard variant method is not complete (though clearly it remains sound): for example the program
\[
\text{do } (n \bmod N) \neq 0 \rightarrow n := n + 1 \;{}_{1/2}\oplus\; n := n - 1 \text{ od} \tag{5}
\]
over natural number \(n\) is certain to terminate, yet from the fact that its body can both increment and decrement \(n\) it is clear there can be no strictly decreasing variant.

With the 0-1 Law of Lem. 6.1 we are able to justify the following variant-based rule for probabilistic termination, sufficient for many practical cases including (5). In Sec. 8 we show it complete over finite state spaces.

**Lemma 7.1** Let \(V\) be an integer-valued expression in the program variables, defined at least over some subset \(I\) of the state space \(S\). Suppose further that for iteration loop

1. there are fixed integer constants \(L\) (low) and \(H\) (high) such that
\[
G \sqcap I \;\Rightarrow\; [L \leq V < H] ,
\]
and

2. the subset \(I\), as a (standard) predicate, is at least\(^{13}\) \(wlp\)-invariant for loop, and

3. for some fixed probability \(p > 0\) and for all integers \(N\) we have
\[
p(G \sqcap I \sqcap [V = N]) \;\Rightarrow\; wp.body.[V < N] .
\]

Then termination is certain from any state in which \(I\) holds: we have \(I \Rightarrow T\), where \(T\) is the termination condition of loop.
**Proof:** We show first that Assumption 2 allows Assumption 3 to be strengthened, as follows:
\[
\begin{align*}
& wp.body.(I \sqcap [V < N]) \\
\equiv\;& wp.body.(I \mathbin{\&} [V < N]) && \text{standard predicates} \\
\Leftarrow\;& wlp.body.I \mathbin{\&} wp.body.[V < N] && \text{Fact B.2} \\
\Leftarrow\;& (G \sqcap I) \mathbin{\&} p(G \sqcap I \sqcap [V = N]) && \text{Assumptions 2, 3} \\
\equiv\;& p(G \sqcap I \sqcap [V = N]) . && \text{\(G\), \(I\) standard}
\end{align*}
\]
Thus we can add \(I \sqcap\) to the postcondition on the right-hand side of Assumption 3.

\(^{13}\)Being \(wp\)-invariant is a stronger requirement, therefore sufficient also.

Now we continue with induction to show that for all \(n \geq 0\) we have
\[
p^{n}(I \sqcap [V < L + n]) \;\Rightarrow\; T . \tag{6}
\]
For the base case we reason from Assumption 1 that
\[
p^{0}(I \sqcap [V < L]) \;\Rightarrow\; \overline{G} \;\Rightarrow\; T .
\]
For the step case we reason
\[
\begin{align*}
& p^{n+1}(I \sqcap [V < L + n + 1]) \\
\equiv\;& \overline{G} \sqcap p^{n+1}(I \sqcap [V < L + n + 1]) \;\sqcup\; G \sqcap p^{n+1}(I \sqcap [V < L + n + 1]) && \text{\(G\) standard} \\
\Rightarrow\;& G \sqcap p^{n+1}(I \sqcap [V < L + n + 1]) \;\sqcup\; T && \overline{G} \Rightarrow T \\
\equiv\;& p^{n+1}(G \sqcap I \sqcap [V < L + n]) \;\sqcup\; p^{n+1}(G \sqcap I \sqcap [V = L + n]) \;\sqcup\; T && \text{\(G\) standard; splitting the range of \(V\)} \\
\Rightarrow\;& p(T) \;\sqcup\; p^{n+1}(G \sqcap I \sqcap [V = L + n]) \;\sqcup\; T && \text{inductive hypothesis} \\
\Rightarrow\;& p^{n+1}(G \sqcap I \sqcap [V = L + n]) \;\sqcup\; T && p(T) \Rightarrow T \\
\Rightarrow\;& wp.body.(p^{n}(I \sqcap [V < L + n])) \;\sqcup\; T && \text{Assumption 3 strengthened; Fact B.6} \\
\Rightarrow\;& wp.body.T \;\sqcup\; T && \text{inductive hypothesis; Fact B.7} \\
\equiv\;& T . && wp.body.T \Rightarrow T
\end{align*}
\]

With (6) now established, from Assumption 1 we can conclude
\[
\begin{align*}
& p^{H-L}(I) \\
\equiv\;& p^{H-L}(G \sqcap I) \;\sqcup\; p^{H-L}(\overline{G} \sqcap I) && \text{\(G\) standard} \\
\Rightarrow\;& p^{H-L}(G \sqcap I) \;\sqcup\; T && \overline{G} \Rightarrow T \\
\Rightarrow\;& p^{H-L}(I \sqcap [V < L + (H - L)]) \;\sqcup\; T && \text{Assumption 1} \\
\Rightarrow\;& T \sqcup T && \text{(6) above} \\
\equiv\;& T .
\end{align*}
\]
That, with Assumption 2 and \(p^{H-L} \neq 0\), gives us \(I \Rightarrow T\) directly from Lem. 6.1. \(\square\)

Informally, Lem. 7.1 shows termination given an integer-valued variant bounded above and below such that on each iteration a strict decrease is guaranteed with at least some fixed probability \(p > 0\). Note that the probabilistic variant is allowed to increase — but not above \(H\). (We have emphasised the parts that differ from the standard variant rule.)

The termination of Program (5) now follows immediately from Lem. 7.1 with variant \(n \bmod N\), taking \(L, H := 0, N\).

In some circumstances it is convenient to use other forms of variant argument, variations on Lem. 7.1; one easily proved from it is the more conventional rule in which the variant is bounded below (but not necessarily above), must decrease with fixed probability \(p > 0\) and cannot increase. That rule follows (informally) from Lem. 7.1 by noting that since the variant cannot increase, its initial value determines the upper bound \(H\) required by the lemma; and it shows termination for example of the loop
\[
\text{do } n > 0 \rightarrow n := n - 1 \;{}_{1/2}\oplus\; \text{skip od} ,
\]
for which variant \(n\) suffices with \(L := 0\).

8 Finitary completeness of variants

We now show that, if the state space is finite, the technique set out in Lem. 7.1 is complete for proving certain termination. We construct a variant explicitly: for any state it is the least \(n\) such that the iteration has nonzero probability of terminating in no more than \(n\) steps from that state.
The following lemma establishes its existence and properties.

**Lemma 8.1** Let the state space be \(S\), and take arbitrary loop. Note that \([T] \equiv [wp.loop.1]\) is (the characteristic function of) that subset of \(S\) from which termination of loop is certain. Then there is an integer function \(V\) of the state such that whenever \(G \sqcap [T] \sqcap [V = N]\) holds (termination is certain but has not yet occurred) the probability of \(V\)'s strict decrease in the very next iteration is nonzero; more precisely, we construct \(V\) such that
\[
G \sqcap [T] \sqcap [V = N] \;\Rightarrow\; [wp.body.[V < N]]
\]
for all \(N\).

**Proof:** Define the \(\mathbb{N}\)-indexed probabilistic predicates
\[
\begin{align*}
T_{0} &\;:=\; \overline{G} \\
T_{n+1} &\;:=\; \overline{G} \sqcup wp.body.T_{n} ,
\end{align*}
\]
so that \(T_{N}\) is the probability of termination within \(N\) iterations. The variant is then given by
\[
V \;:=\; (\sqcap n \mid T_{n} > 0) , \tag{7}
\]
which is well-defined in states where \(T \not\equiv 0\) (and thus in particular where \([T] \equiv 1\)); define \(V\) arbitrarily otherwise. Then \(V = N\) in any state in \([T]\) means that, from that state, there is a nonzero probability of termination within \(N\) iterations.

We now show that whenever \(G \sqcap [T] \sqcap [V = N]\) holds (is 1) the probability of establishing \(V < N\) on the very next iteration is nonzero. When \(N = 0\) the result is trivial (antecedent false); for \(N > 0\), unfolding the definitions of \(V\) and the \(T_{n}\) establishes the desired inequality.\(^{14}\) \(\square\)

\(^{14}\)To show that from any state satisfying the standard \(P\) the program prog has nonzero probability of establishing the standard \(Q\), we simply prove \(P \Rightarrow [wp.prog.Q]\).

We now use the lemma to show that if we assume finiteness of the state space the expression \(V\) constructed above satisfies the conditions of Lem. 7.1, so establishing completeness.

**Theorem 8.2** The termination rule of Lem.
7.1 is sound and complete for certain termination over a finite state space.

**Proof:** Soundness was established by Lem. 7.1 directly, even when the state space \(S\) is infinite.

For completeness, note that when \(S\) is finite the expression (7) constructed in Lem. 8.1 is trivially bounded above and below. Similarly the probability of its decrease is bounded away from zero (being a finite infimum of positive quantities); in particular, we choose \(p\) for Lem. 7.1 to be the minimum, taken over the states in the finite set \([T]\), of \(wp.body.[V < N]\) with \(N\) set to the value of the variant in that state.

All that remains therefore is to show that \([T]\) is an invariant of loop, so that we can take \(I := [T]\) in Lem. 7.1. For that we have
\[
\begin{align*}
& G \sqcap [T] \\
\equiv\;& G \sqcap [wp.loop.1] \\
\equiv\;& [G \sqcap wp.loop.1] && \text{\(G\) standard} \\
\Rightarrow\;& [wp.body.(wp.loop.1)] && \text{unfolding the iteration} \\
\Rightarrow\;& wp.body.[wp.loop.1] && \text{Fact B.8} \\
\equiv\;& wp.body.[T] ,
\end{align*}
\]
as required. \(\square\)

9 Example: self-stabilisation

In our final example we apply Lem. 7.1 to a variation\(^{15}\) on Herman's probabilistic self-stabilisation [8], a distributed probabilistic algorithm that can be used for leadership election in a ring of synchronously executing processors.

**Example 9.1** Consider \(N\) identical processors connected clockwise in a ring, as illustrated in Fig. 4. A single processor — a leader — is chosen from them in the following way. Initially each processor is given exactly 1 token; the leader is the first processor to obtain all \(N\) of them.

Fix some probability \(p\) with \(0 < p < 1\). On each step (synchronously) all processors perform the following actions:

1. Make a local probabilistic decision either to pass (probability \(p\)) or to keep (probability \(1 - p\)) all its tokens.

2. If pass, then send all its tokens to the next-clockwise processor; if keep, do nothing.
Figure 4: Example ring topology \((N = 6)\), with initial token assignment shown. See reference M95 at http://www.comlab.ox.ac.uk/oucl/groups/probs/bibliography.html for this illustration.

Figure 5: The variant decreases with probability at least \(p(1 - p)\); it may increase. See reference M95 at http://www.comlab.ox.ac.uk/oucl/groups/probs/bibliography.html for this illustration.

3. Receive tokens passed (if any) from the next-anticlockwise processor, adding them to the tokens currently held (if any).

We show that with probability 1 eventually a single processor will obtain all \(N\) tokens. We define

- the invariant to be that the total number of tokens is constant (at \(N\)), and
- the guard (which if true indicates that termination has not yet occurred) to be that more than one processor holds tokens, and
- the variant to be the shortest length of any ring segment (contiguous sequence of arcs) containing all tokens. (See Fig. 5.)

With those definitions, for proof of termination we simply note that (referring to the assumptions of Lem. 7.1)

1. the guard and invariant imply that the variant is bounded below by 1 and above by \(N\), and
2. the invariant is trivially maintained, and
3. the variant decreases strictly with probability at least \(p(1 - p)\), which is nonzero since \(0 < p < 1\). (Let the least-clockwise processor in the shortest segment decide to pass while the most-clockwise processor decides to keep.)

The conclusion of Lem. 7.1 gives us certain termination: that eventually only one processor contains tokens (negated guard), and that it has all \(N\) of them (invariant). \(\square\)

\(^{15}\)Converting the processors' arithmetic to mod 2 yields a scheme very close to Herman's original; in that case the number of processors must be odd so that \(N \bmod 2\) and \(0 \bmod 2\) can be distinguished on termination.
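Example 9.1 can also be explored by simulation. The sketch below is our own illustrative model of the token ring (the function name, parameters and step cap are assumptions, not from the paper); it checks the token-count invariant at every round and runs until the negated guard holds:

```python
# Simulation sketch of the token-ring scheme of Example 9.1: N processors in a
# clockwise ring, each synchronously passing all its tokens with probability p.
import random

def run_ring(N, p, rng, max_rounds=1_000_000):
    """Run until one processor holds all N tokens; return the number of rounds."""
    tokens = [1] * N                       # one token per processor initially
    for rounds in range(max_rounds):
        if max(tokens) == N:               # negated guard: a leader holds everything
            return rounds
        # Each processor decides pass (prob p) or keep, then all move at once:
        send = [tokens[i] if rng.random() < p else 0 for i in range(N)]
        tokens = [tokens[i] - send[i] + send[(i - 1) % N] for i in range(N)]
        assert sum(tokens) == N            # the invariant: total token count is constant
    raise RuntimeError("no leader elected within max_rounds")

rng = random.Random(0)
for _ in range(5):
    assert run_ring(6, 0.5, rng) >= 0      # every seeded run here elects a leader
```

Termination within the (generous) step cap is only probabilistically certain, but in practice each run converges after a modest number of rounds, in line with the variant argument above.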
Note that the use of a ring is not essential for correctness: in fact if each processor chooses probabilistically from all others where to pass its tokens (with a nonzero probability for each possible recipient), then termination is still certain — and in fact is easier to show than with a ring. The variant is just the number of processors holding tokens, and cannot increase; it decreases with nonzero probability \(p(1 - p)r\), where the extra factor \(r\) is the minimum probability over all processor pairs \(P, P'\) that \(P\) will choose \(P'\) as its recipient. That ‘chaotic’ scheme remains correct even if the processors execute asynchronously, provided their scheduling is starvation-free.

10 Conclusion

Our main results are Thm. 4.6 for total correctness of iterations when the termination condition is known, and Thm. 8.2 for termination with probability 1.

With the examples of Sections 5 and 9 we have shown that probabilistic reasoning for partial correctness — on this scale at least — is not much more complex than standard reasoning. For total correctness, however, it seems harder to achieve simplification using grossly pessimistic variants (a familiar technique in the standard case). Our experience so far suggests that it is often necessary to use accurate bounds on the number of iterations remaining, and that can require intricate calculation.

We do not have general rules for determining the termination condition when it is not 1; at this stage it seems those situations have to be handled by using the \(wp\) semantics to extract a recurrence relation to which standard probabilistic methods can then be applied. A promising approach however is to use (probabilistic) data refinement to extract not a recurrence relation but a simple(r) program, involving only the variant captured by a single variable.
That program’s termination condition is equal to the original, but could perhaps be taken straight from the literature; one would then have access to a collection of termination ‘paradigms’. A longer term approach to probabilistic termination is to build a temporal logic over the probabilistic predicate transformers [17], generalising a similar construction by Morris [20] over standard transformers. The resulting properties are then very like those of Ben-Ari, Pnueli and Manna [1], and allow termination conditions to be determined for quite complicated programs using structured arguments in the style for example of UNITY [2]. Acknowledgements This work was carried out in collaboration with Annabelle McIver, Jeff Sanders and Karen Seidel; we are grateful for the support of the EPSRC. I benefited from the intellectual and social stimulation provided by six months’ stay at the Software Verification Research Center and the Department of Computer Science of the University of Queensland. The work has been improved by the attentions of IFIP Working Groups 2.1 and 2.3. References A Nondeterministic loops Here we show that the result of Thm. 4.6 extends to nondeterministic loops. The approach is to use Fact B.5 to replace the loop body by an appropriate deterministic refinement of it, for which we use the following lemma. **Notation A.1** The program *dloop* is defined \[ dloop := \text{do } G \rightarrow \text{det } \text{od}, \] for standard predicate \( G \) (the loop guard) and deterministic program \( \text{det} \) (the loop body). **Lemma A.2** For any *loop* and postcondition \( Q \) there is a *dloop* such that \( \text{body} \sqsubseteq \text{det} \) and \[ \text{wp.dloop}.Q \equiv \text{wp.loop}.Q. \] **Proof:** Define \( P := \text{wp.loop}.Q \), and use Fact B.5 to choose \( \text{det} \) so that \( \text{body} \sqsubseteq \text{det} \) and \[ \text{wp.body}.P \equiv \text{wp.det}.P. 
\quad (8) \] Then we have \[ \begin{align*} \overline{G} \sqcap Q \sqcup G \sqcap \text{wp.det}.P & \equiv \overline{G} \sqcap Q \sqcup G \sqcap \text{wp.body}.P \quad \text{by construction (8)} \\ & \equiv P, \quad \text{definition of } P; \text{ refolding of the iteration} \end{align*} \] so that \( P \) satisfies the (least) fixed-point equation defining \( \text{wp.dloop}.Q \). Hence \( \text{wp.dloop}.Q \Rightarrow P \); and since \( \text{loop} \sqsubseteq \text{dloop} \) gives \( P \equiv \text{wp.loop}.Q \Rightarrow \text{wp.dloop}.Q \), we have \[ \text{wp.dloop}.Q \equiv \text{wp.loop}.Q \] as required.

With Lem. A.2 we have our theorem easily.

**Theorem A.3** If \( I \) is a *wp*-invariant of *loop* and \( I \Rightarrow T \), then \[ I \Rightarrow \text{wp.loop}.(\overline{G} \sqcap I). \]

**Proof:** Use Lem. A.2 to choose a deterministic refinement \( \text{det} \) of \( \text{body} \) so that \[ \text{wp.dloop}.(\overline{G} \sqcap I) \equiv \text{wp.loop}.(\overline{G} \sqcap I), \] and observe that since \( \text{body} \sqsubseteq \text{det} \) we have \( I \) a *wp*-invariant of \( \text{dloop} \) also. The result is then immediate from Thm. 4.6.

B Facts about probabilistic \( wp \) and \( wlp \)

Proofs of these facts are to be found in other publications of the Group [22].

**Fact B.1** For standard program \( prog \) and standard postcondition \( Q \) we have \[ wlp.\ prog.\ Q \sqcap wp.\ prog.\ 1 \implies wp.\ prog.\ Q . \]

**Fact B.2** *sub-distributivity of \&* For program \( prog \) and postconditions \( Q_0, Q_1 \) we have \[ wlp.\ prog.\ Q_0 \& wp.\ prog.\ Q_1 \implies wp.\ prog.\ (Q_0 \& Q_1) . \] We observe as a special case that \[ wp.\ prog.\ Q_0 \& wp.\ prog.\ Q_1 \implies wp.\ prog.\ (Q_0 \& Q_1) , \] since for any \( prog \) and \( Q \) we have \( wp.\ prog.\ Q \implies wlp.\ prog.\ Q \). If both \( Q_0 \) and \( wlp.\ prog.\ Q_0 \) are standard, the above reduces further to sub-distributivity of \( \sqcap \). 
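Lemma A.2 can be illustrated on a toy finite-state example. The following sketch is not the paper's formalism: the state space, guard and branches are invented, expectations are just vectors indexed by state, and the loop's wp is computed as a least fixed point by iteration from the zero expectation. The demonic body here can refuse progress forever, so the guaranteed termination probability from any state with \(x > 0\) is 0; resolving the choice per state against the loop's own fixed point (the construction behind Fact B.5) yields a deterministic refinement with the same wp.

```python
# Toy finite-state illustration of Lemma A.2 (all details invented).
# States are 0..N; expectations are vectors; wp.loop.Q is the least
# fixed point of F(P)(x) = Q(x) if not G(x) else wp.body.P(x).
N = 5
G = lambda x: x > 0            # loop guard: iterate while x > 0
Q = [1.0] * (N + 1)            # postcondition 1: wp gives termination prob.

def wp_branch_a(P, x):         # branch a: decrement w.p. 1/2, else stay
    return 0.5 * P[x - 1] + 0.5 * P[x]

def wp_branch_b(P, x):         # branch b: stay put (refuses progress)
    return P[x]

def wp_loop(body_wp):
    P = [0.0] * (N + 1)        # iterate from zero: the LEAST fixed point
    for _ in range(1000):
        P = [body_wp(P, x) if G(x) else Q[x] for x in range(N + 1)]
    return P

# Demonic body: pointwise minimum over the two branches.
demonic = wp_loop(lambda P, x: min(wp_branch_a(P, x), wp_branch_b(P, x)))

# Deterministic refinement: fix, per state, the branch that attains the
# minimum at the demonic fixed point, then recompute wp.
choice = {x: (wp_branch_a
              if wp_branch_a(demonic, x) <= wp_branch_b(demonic, x)
              else wp_branch_b)
          for x in range(N + 1) if G(x)}
det = wp_loop(lambda P, x: choice[x](P, x))

print(demonic == det)  # True: the refinement has the same wp
```

Here the demon always prefers the stay-put branch, so both `demonic` and `det` come out as `[1, 0, 0, 0, 0, 0]`: termination is guaranteed only when the loop is never entered.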
**Fact B.3** *sub-distributivity of +* For any program \( prog \) and postconditions \( Q_0, Q_1 \) we have \[ wp.\ prog.\ Q_0 + wlp.\ prog.\ Q_1 \implies wp.\ prog.\ (Q_0 + Q_1) , \] with equality when \( prog \) is deterministic. **Fact B.4** For any program \( prog \) we have \( wlp.\ prog.\ 1 \equiv 1 \). **Fact B.5** For any program \( prog \) and postcondition \( Q \) there is a deterministic refinement of it — a deterministic \( det \) with \( prog \sqsubseteq det \) — such that \[ wp.\ prog.\ Q \equiv wp.\ det.\ Q . \] **Fact B.6** *scaling* For any program \( prog \), postcondition \( Q \) and constant \( c \) with \( 0 \leq c \leq 1 \), we have \[ c(wp.\ prog.\ Q) \equiv wp.\ prog.\ (cQ) . \] **Fact B.7** *monotonicity* For any program \( prog \) and postconditions \( Q, Q' \) with \( Q \implies Q' \) we have \[ wp.\ prog.\ Q \implies wp.\ prog.\ Q' \] and \( wlp.\ prog.\ Q \implies wlp.\ prog.\ Q' \). **Fact B.8** For any program \( prog \) and postcondition \( Q \) we have \[ \lfloor wp.\ prog.\ Q \rfloor \implies wp.\ prog.\ \lfloor Q \rfloor . \]
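The facts above can be checked numerically on small examples. The sketch below is illustrative only: it invents a one-step program over three states (a demonic choice between two probabilistic updates), represents expectations as vectors, and checks the special case of Fact B.2 (with probabilistic conjunction \(p \,\&\, q = \max(0, p + q - 1)\)) and the scaling law of Fact B.6.

```python
# Toy numeric check of wp facts on a one-step program over states
# {0,1,2}.  wp of a demonic choice is the pointwise minimum of the
# branch expectations; both branches are probabilistic updates.
def wp(Q):
    a = 0.5 * Q[0] + 0.5 * Q[1]   # branch a: state 0 or 1, each w.p. 1/2
    b = Q[2]                       # branch b: state 2 with certainty
    return [min(a, b)] * 3         # same behaviour from every initial state

def conj(p, q):                    # probabilistic conjunction &
    return max(0.0, p + q - 1.0)

Q0, Q1 = [0.9, 0.2, 0.7], [0.6, 1.0, 0.3]

# Fact B.2 (special case): wp.Q0 & wp.Q1  ==>  wp.(Q0 & Q1)
lhs = [conj(p, q) for p, q in zip(wp(Q0), wp(Q1))]
rhs = wp([conj(p, q) for p, q in zip(Q0, Q1)])
print(all(l <= r + 1e-12 for l, r in zip(lhs, rhs)))  # True

# Fact B.6 (scaling): c * wp.Q == wp.(c * Q) for 0 <= c <= 1; it holds
# exactly here because min commutes with nonnegative scaling.
c = 0.4
print(all(abs(c * p - q) < 1e-12
          for p, q in zip(wp(Q0), wp([c * x for x in Q0]))))  # True
```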
Fine-grained and Accurate Source Code Differencing

Jean-Rémy Falleri, Floréal Morandat, Xavier Blanc (Univ. Bordeaux, LaBRI, UMR 5800, F-33400 Talence, France; falleri@labri.fr, fmorand@labri.fr, xblanc@labri.fr); Matias Martinez, Martin Monperrus (INRIA and University of Lille, France; matias.martinez@inria.fr, martin.monperrus@inria.fr)

HAL Id: hal-01054552, https://hal.archives-ouvertes.fr/hal-01054552, submitted on 12 Sep 2014. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not.

ABSTRACT

At the heart of software evolution is a sequence of edit actions, called an edit script, made to a source code file. Since software systems are stored version by version, the edit script has to be computed from these versions, which is known to be a complex task. Existing approaches usually compute edit scripts at the text granularity with only add-line and delete-line actions. However, inferring syntactic changes from such an edit script is hard. Moreover, since moving code is a frequent action performed when editing code, it should also be taken into account. In this paper, we tackle these issues by introducing an algorithm computing edit scripts at the abstract syntax tree granularity, including move actions. 
Our objective is to compute edit scripts that are short and close to the original developer intent. Our algorithm is implemented in a freely-available and extensible tool that has been intensively validated.

Categories and Subject Descriptors: D.2.3 [Software Engineering]: Coding Tools and Techniques

General Terms: Algorithms, Experimentation

Keywords: Software evolution, Program comprehension, Tree differencing, AST.

1. INTRODUCTION

The first law of software evolution states that almost all software systems have to evolve to be satisfactory [19]. Since this law was formulated, many studies have been performed to better understand how software systems evolve; together they form what is called the software evolution research field [21]. There is global software evolution (e.g. evolution of requirements, of execution environments, ...) and local software evolution (evolution of source code files). In this paper, we focus on the latter, that is on understanding how source code files evolve. In particular, we focus on edit scripts, which are sequences of edit actions made to a source code file. Usually, since software is stored in version control systems, edit scripts are computed between two versions of the same file. The goal of an edit script is to accurately reflect the actual change that has been performed on a file. Edit scripts are used by developers on a daily basis. For example, the Unix diff tool takes as input two versions of a source code file, performs the Myers algorithm [24] at the text-line granularity, and returns an edit script indicating which lines have been added or deleted. However, the limitations of diff are twofold. First, it only computes additions and deletions and does not consider other kinds of edit actions such as update and move. Second, it works at a granularity (the text line) that is both coarse-grained and not aligned with the source code structure: the abstract syntax tree. 
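The line-granularity behaviour described above can be reproduced with Python's standard `difflib`, used here as a stand-in for the Unix diff tool; the two code fragments are invented for illustration:

```python
# Line-granularity differencing in the style of Unix diff: only
# add-line and delete-line actions, no update and no move.
import difflib

before = ["public int foo() {", "  return 1;", "}"]
after  = ["private int foo() {", "  if (i == -1)", "    return 1;", "}"]

script = []
matcher = difflib.SequenceMatcher(a=before, b=after)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op in ("delete", "replace"):
        script += [("delete-line", line) for line in before[i1:i2]]
    if op in ("insert", "replace"):
        script += [("add-line", line) for line in after[j1:j2]]

for action in script:
    print(action)
```

Note how the re-indented `return 1;` shows up as a delete plus an add: the syntactic fact that the statement survived (merely moved inside a new `if`) is invisible at this granularity.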
To overcome this main limitation, there are algorithms that work at the abstract syntax tree (AST) level [13]. The main advantage of using the AST granularity is that the edit script directly refers to the structure of the code. For instance, if an edit action is the addition of a new function node, it clearly means that a new function has been added in the code. Despite several key contributions (e.g. [13]), the problem of computing AST edit scripts is still open, with two main challenges: handling move actions, and scaling to fine-grained ASTs with thousands of nodes\(^1\). This is where this paper makes a contribution. To design our novel algorithm, we take the viewpoint of the developer: she is never interested in the theoretically shortest edit script. She is rather interested in having an edit script that reflects well the actual changes that happened. Thus our objective is not to find the shortest sequence of actions between two versions, but a sequence that reflects well the developer's intent. Consequently, we devise an algorithm based on heuristics that embody pragmatic rules about what a good edit script is and, as importantly, that is efficient and scales to large ASTs. This algorithm has been implemented within a freely-available and extensible tool\(^2\). To sum up, our contributions are: - a novel efficient AST differencing algorithm that takes into account move actions, and its implementation; \(^1\)The best known algorithm with add, delete and update actions has a \(O(n^3)\) time complexity with \(n\) being the number of nodes of the AST [27]. Computing the minimum edit script that can include move node actions is known to be NP-hard [4]. 
\(^2\)github.com/jrfaller/gumtree

• an automated evaluation of the implementation's performance on real data;
• a manual evaluation of the results of the algorithm through the manual assessment of 144 differencing scenarios;
• a large-scale automated evaluation of 12,792 differencing scenarios showing that the results of our algorithm are more accurate than the related work, even on fine-grained ASTs.

The rest of this paper is structured as follows: Section 2 presents what AST differencing is. Section 3 describes our new AST differencing algorithm. Section 4 presents the tool that implements this new algorithm and reports on its performance. Section 5 presents an empirical evaluation of our tool. Section 6 presents the related work. Finally, Section 7 concludes and presents future work.

2. AST DIFFERENCING

Prior to presenting AST differencing, we briefly introduce the main concepts defining the AST structure. We consider that an AST is a labeled ordered rooted tree where nodes may have a string value. Labels of nodes correspond to the name of their production rule in the grammar, i.e., they encode the structure. Values of the nodes correspond to the actual tokens in the code. More formally, let $T$ be an AST. $T$ is a set of nodes. A tree $T$ has one node that is the root (denoted by $\text{root}(T)$). Each node $t \in T$ has a parent $p \in T \cup \{\emptyset\}$. The only node that has $\emptyset$ for parent is the root. The parent of a node is denoted by $\text{parent}(t)$. Each node $t \in T$ has a sequence of children ($\text{children}(t)$). Each node has a label in an alphabet $l \in \Sigma$ ($\text{label}(t) = l$). Each node has a string value $v \in \text{String}$ that is possibly empty ($\text{value}(t) = v$, possibly $v = \epsilon$). As an example, we consider a simple Java source code and its corresponding AST (see the bottom-left of Figure 1). The AST of this Java source code contains 19 nodes that correspond to the structure of the Java programming language. 
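The tree definition just given can be sketched as a tiny data structure; the class and the labels below are invented for illustration, not the tool's API:

```python
# A minimal AST node matching the formal definition: a label (grammar
# production name), an optional string value (the token), an ordered
# list of children, and a parent (None only for the root).
class Node:
    def __init__(self, label, value="", children=()):
        self.label, self.value = label, value
        self.parent = None
        self.children = list(children)
        for c in self.children:
            c.parent = self          # wire the parent pointers

tree = Node("MethodDeclaration", children=[
    Node("SimpleName", "foo"),
    Node("ReturnStatement", children=[Node("NumberLiteral", "1")]),
])

assert tree.parent is None           # the root has no parent
assert tree.children[1].children[0].value == "1"
```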
Each node of the AST has therefore a label, which maps to structural elements of the source code (such as $\text{MethodDeclaration}$ or $\text{NumberLiteral}$), and a value that corresponds to the actual tokens in the code (such as the $\text{NumberLiteral}$ associated to 1). Some values do not encode information and are therefore discarded; for instance $\text{MethodDeclaration}$ has no interesting token associated to it and thus no value. ASTs can have different granularities: a node can encode a whole instruction or finer-grained expressions. We believe that the finer the granularity, the better. To transform one AST into another, we consider the following edit actions:
- $\text{add}(t, t_p, i, l, v)$ adds a new node $t$ in the AST. If $t_p$ is not null and $i$ is specified, then $t$ is the $i^{th}$ child of $t_p$. Otherwise $t$ is the new root node and has the previous root node as its only child. Finally, $l$ is the label of $t$ and $v$ is the value of $t$.
- $\text{delete}(t)$ removes a leaf node of the AST.
- $\text{updateValue}(t, v)$ replaces the old value of node $t$ by $v$.
- $\text{move}(t, t_p, i)$ moves a node $t$ and makes it the $i^{th}$ child of $t_p$. Note that all children of $t$ are moved as well; this action therefore moves a whole subtree.

As there are many possible edit scripts that perform the same transformation, the edit script quality depends on its length: the shorter the script, the better. Note that finding the shortest transformation is NP-hard when the move action is taken into consideration. We then consider in this paper that the AST differencing problem inputs two ASTs and aims at identifying a short edit script $\sigma$ of edit actions (including move) that transforms a first AST (named $T_1$) into a second one (named $T_2$). The existing algorithms that perform such an AST differencing use heuristics to return a short edit script $\sigma$. Moreover, they usually follow a two-step process. First, they establish mappings (pairs of nodes) between the similar nodes of the two ASTs. 
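The edit actions defined above can be sketched on a toy tree representation. This is an illustrative simplification: nodes are plain dicts, the parent is passed explicitly rather than stored, and the new node is returned by `add`:

```python
# Sketch of the edit actions on an ordered tree; all names illustrative.
def node(label, value="", children=None):
    return {"label": label, "value": value, "children": children or []}

def add(tp, i, label, value):
    """add: insert a new node as the i-th child of tp."""
    t = node(label, value)
    tp["children"].insert(i, t)
    return t

def delete(tp, t):
    """delete: remove a leaf node t from its parent tp."""
    assert not t["children"]          # only leaves may be deleted
    tp["children"].remove(t)

def move(tp_old, t, tp_new, i):
    """move: make t the i-th child of tp_new, whole subtree included."""
    tp_old["children"].remove(t)
    tp_new["children"].insert(i, t)

root = node("MethodDeclaration")
ret = add(root, 0, "ReturnStatement", "")
lit = add(ret, 0, "NumberLiteral", "1")
blk = add(root, 1, "IfStatement", "")
move(root, ret, blk, 0)   # the whole ReturnStatement subtree moves
assert blk["children"][0]["children"][0]["value"] == "1"
delete(ret, lit)
assert ret["children"] == []
```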
There are two constraints for these mappings: a given node can only belong to one mapping, and mappings involve two nodes with identical labels. Second, based on these mappings, they deduce the edit script that must be performed on the first AST to obtain the second one. The first step is the most crucial one, because quadratic optimal algorithms exist for the second step [6, 15]. In the next section, we present a new algorithm to compute mappings between two ASTs.

3. THE GUMTREE ALGORITHM

As explained in the previous section, AST differencing algorithms work in two steps: establishing mappings, then deducing an edit script. Since an optimal and quadratic algorithm has already been developed for the second step [6], we only explain in this section how we look for the mappings between two ASTs. The output of this algorithm can then be used by the algorithm of Chawathe et al. [6] to compute the actual edit script. Our algorithm to compute the mappings between two ASTs is composed of two successive phases:

1. A greedy top-down algorithm to find isomorphic subtrees of decreasing height. Mappings are established between the nodes of these isomorphic subtrees. They are called anchor mappings.
2. A bottom-up algorithm where two nodes match (called a container mapping) if their descendants (children of the nodes, and their children, and so on) include a large number of common anchors. When two nodes match, we finally apply an optimal algorithm to search for additional mappings (called recovery mappings) among their descendants.

This algorithm is inspired by the way developers manually look at changes between two files. First they search for the biggest unmodified pieces of code. Then they deduce which containers of code can be mapped together. Finally they look at precise differences in what is left over in each container. To better illustrate our algorithm, we introduce the example shown in Figure 1. 
3.1 Top-down Phase

The first step of GumTree is a top-down greedy search of the greatest isomorphic subtrees between $T_1$ and $T_2$. Before explaining how we proceed, we introduce the notion of height in a tree. The height of a node $t \in T$ is defined as: 1) for a leaf node $t$, $\text{height}(t) = 1$ and 2) for an internal node $t$, $\text{height}(t) = \max(\{\text{height}(c) \mid c \in \text{children}(t)\}) + 1$. The algorithm uses an auxiliary data structure called a height-indexed priority list. This list contains a sequence of nodes, ordered by decreasing height. The following functions are associated with this data structure: $\text{push}(t, l)$ inserts the node $t$ in the list $l$; $\text{peekMax}(l)$ returns the greatest height of the list; $\text{pop}(l)$ returns and removes from $l$ the set of all nodes of $l$ having a height equal to $\text{peekMax}(l)$; $\text{open}(t, l)$ inserts all the children of $t$ into $l$. We also define the dice function, which measures the ratio of common descendants between two nodes given a set of mappings $M$: \[ \text{dice}(t_1, t_2, M) = \frac{2 \times |\{(t, t') \in M \mid t \in s(t_1) \wedge t' \in s(t_2)\}|}{|s(t_1)| + |s(t_2)|}, \] with $s(t_i)$ being the set of the descendants of node $t_i$. The dice coefficient ranges in the $[0, 1]$ real interval; a value of 1 indicates that all the descendants of $t_1$ are mapped to descendants of $t_2$. The algorithm of the top-down phase of GumTree is shown in Algorithm 1. In this algorithm, we map the common subtrees of $T_1$ and $T_2$ with the greatest height possible. The principle is to start with the roots (since they have the greatest heights) and to check if they are isomorphic. If they are not, their children are then tested. A node is matched as soon as an isomorphic node is found in the other tree. When a given node can be matched to several nodes, all the potential mappings are kept in a dedicated candidate mappings list. 
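The auxiliary structures just described can be sketched as follows; the tuple-based tree encoding and the class are illustrative, not the tool's implementation:

```python
# Sketch of the height-indexed priority list and the dice measure.
# Trees are (label, children) tuples; mappings are a set of id pairs.
def height(t):
    _, children = t
    return 1 if not children else 1 + max(height(c) for c in children)

def descendants(t):
    _, children = t
    out = []
    for c in children:
        out.append(c)
        out += descendants(c)
    return out

class HeightList:
    def __init__(self):
        self.items = []                       # list of (height, node)
    def push(self, t):
        self.items.append((height(t), t))
    def peek_max(self):
        return max(h for h, _ in self.items) if self.items else 0
    def pop(self):                            # all nodes of maximal height
        m = self.peek_max()
        top = [t for h, t in self.items if h == m]
        self.items = [(h, t) for h, t in self.items if h != m]
        return top
    def open(self, t):
        for c in t[1]:
            self.push(c)

def dice(t1, t2, mappings):
    s1, s2 = descendants(t1), descendants(t2)
    common = sum(1 for a in s1 for b in s2 if (id(a), id(b)) in mappings)
    return 2.0 * common / (len(s1) + len(s2)) if s1 or s2 else 0.0

leaf = ("NumberLiteral", [])
t = ("ReturnStatement", [leaf])
l = HeightList(); l.push(t)
assert l.peek_max() == 2 and l.pop() == [t]
```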
This list is processed after all nodes that are uniquely matched have been processed; those nodes are directly placed into the mappings set. The algorithm considers only nodes with a height greater than $\text{minHeight}$. To process the candidate mappings, we use the dice function on the parents of each candidate mapping. The values of this function are used to sort the candidate mappings list, mappings with greater values coming first. Then, until the candidate mappings list is empty, we remove the first element, add it to the mappings set, and remove from the candidate mappings list the mappings involving a node of this mapping. On the sample trees of Figure 1 with $\text{minHeight} = 2$, Algorithm 1 finds the mappings shown with dashed lines.

Algorithm 1: The algorithm of the top-down phase.
Data: A source tree $T_1$ and a destination tree $T_2$, a minimum height $\text{minHeight}$, two empty height-indexed priority lists $L_1$ and $L_2$, an empty list $A$ of candidate mappings, and an empty set of mappings $M$
Result: The set of mappings $M$
1. $\text{push}(\text{root}(T_1), L_1)$;
2. $\text{push}(\text{root}(T_2), L_2)$;
3. while $\min(\text{peekMax}(L_1), \text{peekMax}(L_2)) > \text{minHeight}$ do
4. if $\text{peekMax}(L_1) \neq \text{peekMax}(L_2)$ then
5. if $\text{peekMax}(L_1) > \text{peekMax}(L_2)$ then
6. foreach $t \in \text{pop}(L_1)$ do $\text{open}(t, L_1)$;
7. else
8. foreach $t \in \text{pop}(L_2)$ do $\text{open}(t, L_2)$;
9. else
10. $H_1 \leftarrow \text{pop}(L_1)$; $H_2 \leftarrow \text{pop}(L_2)$;
11. foreach $\langle t_1, t_2 \rangle \in H_1 \times H_2$ do
12. if $\text{isomorphic}(t_1, t_2)$ then
13. if $\exists t_x \in T_2 \mid \text{isomorphic}(t_1, t_x) \land t_x \neq t_2$ or $\exists t_x \in T_1 \mid \text{isomorphic}(t_x, t_2) \land t_x \neq t_1$ then
14. $\text{add}(A, \langle t_1, t_2 \rangle)$;
15. else
16. add all pairs of isomorphic nodes of $s(t_1)$ and $s(t_2)$ to $M$;
17. foreach $t_1 \in H_1$ appearing in no pair of $A \cup M$ do $\text{open}(t_1, L_1)$;
18. foreach $t_2 \in H_2$ appearing in no pair of $A \cup M$ do $\text{open}(t_2, L_2)$;
19. sort $\langle t_1, t_2 \rangle \in A$ using $\text{dice}(\text{parent}(t_1), \text{parent}(t_2), M)$;
20. while $\text{size}(A) > 0$ do
21. $\langle t_1, t_2 \rangle \leftarrow \text{remove}(A, 0)$;
22. add all pairs of isomorphic nodes of $s(t_1)$ and $s(t_2)$ to $M$;
23. $A \leftarrow A \setminus \{\langle t_1, t_x \rangle \in A\}$;
24. $A \leftarrow A \setminus \{\langle t_x, t_2 \rangle \in A\}$;

3.2 Bottom-up Phase

Algorithm 2 shows the bottom-up phase, where the mappings produced during the top-down phase are taken as input. First we look for container mappings, which are established when two nodes have a significant number of matching descendants. For each container mapping found, we look for recovery mappings, which are searched for among the still unmatched descendants of the mapping's nodes. To find the container mappings, the nodes of $T_1$ are processed in post-order. For each unmatched non-leaf node of $T_1$, we extract a list of candidate nodes from $T_2$. A node $c \in T_2$ is a candidate for $t_1$ if $\text{label}(t_1) = \text{label}(c)$, $c$ is unmatched, and $t_1$ and $c$ have some matching descendants. We then select the candidate $t_2 \in T_2$ with the greatest $\text{dice}(t_1, t_2, M)$ value. If $\text{dice}(t_1, t_2, M) > \text{minDice}$, then $t_1$ and $t_2$ are matched together. To search for additional mappings between the descendants of $t_1$ and $t_2$, we first remove their matched descendants, and if both resulting subtrees have a size smaller than $\text{maxSize}$, we apply an algorithm denoted $\text{opt}$ that finds a shortest edit script without move actions. In our implementation we use the RTED algorithm [27]. The mappings induced from this edit script are added to $M$ if they involve nodes with identical labels. 
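The top-down phase (Algorithm 1) can be condensed into a small executable sketch. This is a deliberate simplification: subtrees are compared via a structural hash (giving the constant-time isomorphism test the paper mentions), the first isomorphic candidate wins, and the candidate-mapping disambiguation via the parents' dice values is omitted:

```python
# Condensed, illustrative sketch of the greedy top-down anchor search.
# Trees are (label, children) tuples.
def h(t):                        # height of a subtree
    label, children = t
    return 1 if not children else 1 + max(h(c) for c in children)

def key(t):                      # structural hash: equal iff isomorphic
    label, children = t
    return (label, tuple(key(c) for c in children))

def subtrees(t):
    yield t
    for c in t[1]:
        yield from subtrees(c)

def top_down(t1, t2, min_height=2):
    index = {}                   # hash -> unmatched subtrees of t2
    for s in subtrees(t2):
        index.setdefault(key(s), []).append(s)
    mappings = []
    def walk(a):                 # greedy: match maximal subtrees first
        if h(a) >= min_height and index.get(key(a)):
            b = index[key(a)].pop(0)
            mappings.append((a, b))   # anchor: whole subtrees match
        else:
            for c in a[1]:
                walk(c)
    walk(t1)
    return mappings

before = ("Method", [("Return", [("Num", [])]), ("Name", [])])
after  = ("Method", [("If", [("Return", [("Num", [])])]), ("Name", [])])
m = top_down(before, after)
print([(a[0], b[0]) for a, b in m])   # [('Return', 'Return')]
```

The `Return` subtree is found even though it has moved inside a new `If` node, which is exactly the information a line-based diff loses.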
On the sample trees of Figure 1, with $\text{minDice} = 0.2$, Algorithm 2 finds the container mappings shown using short-dotted lines. From these container mappings, with $\text{maxSize} = 100$, several recovery mappings are found, shown with alternate-dotted lines. Finally, the edit script generated from these mappings is as follows (nodes $a$, $b$ and $c$ are shown in Figure 1, nodes $t$ are new nodes):
- $\text{add}(t_1, a, 1, \text{ReturnStatement}, \epsilon)$
- $\text{add}(t_2, t_1, 0, \text{StringLiteral, Bar})$
- $\text{add}(t_3, a, 2, \text{IfStatement}, \epsilon)$
- $\text{add}(t_4, t_3, 0, \text{InfixExpression, ==})$
- $\text{add}(t_5, t_4, 0, \text{SimpleName, i})$
- $\text{add}(t_6, t_4, 1, \text{PrefixExpression, -})$
- $\text{add}(t_7, t_6, 0, \text{NumberLiteral, 1})$
- $\text{move}(b, t_3, 1)$
- $\text{updateValue}(c, \text{private})$

We recommend the following values for the three thresholds of our algorithm. We recommend $\text{minHeight} = 2$ to prevent single identifiers from matching everywhere. $\text{maxSize}$ is used in the recovery part of Algorithm 2, which can trigger a cubic algorithm; to avoid long computation times we recommend using $\text{maxSize} = 100$. Finally, when fewer than 50% of the nodes are common, two container nodes are probably different; therefore we recommend using $\text{minDice} = 0.5$.

3.3 Complexity Analysis

Our algorithm has a worst-case complexity of $O(n^2)$ where $n = \max(|T_1|, |T_2|)$. Indeed, Algorithm 1 performs in the worst case a Cartesian product of nodes with identical heights. Since the isomorphism test we use is in $O(1)$ thanks to the hashcodes proposed in [7], the whole algorithm is $O(n^2)$. Moreover, with real ASTs this worst case is very unlikely to happen. Algorithm 2 also performs a Cartesian product of unmatched nodes in the worst case. This operation is also $O(n^2)$ because all sub-operations are bounded, even the cubic algorithm $\text{opt}$, which is only applied on trees smaller than a fixed size. 
Finally, the algorithm that computes the edit script from the mappings, described in [6], also has an $O(n^2)$ worst-case complexity.

Algorithm 2: The algorithm of the bottom-up phase.
Data: Two trees $T_1$ and $T_2$, a set $M$ of mappings (resulting from the top-down phase), a threshold $\text{minDice}$ and a maximum tree size $\text{maxSize}$
Result: The set of mappings $M$
1. foreach $t_1 \in T_1 \mid t_1$ is not matched $\land$ $t_1$ has matched children, in post-order do
2. $t_2 \leftarrow \text{candidate}(t_1, M)$;
3. if $t_2 \neq \text{null}$ and $\text{dice}(t_1, t_2, M) > \text{minDice}$ then
4. $M \leftarrow M \cup \{\langle t_1, t_2 \rangle\}$;
5. if $\max(|s(t_1)|, |s(t_2)|) < \text{maxSize}$ then
6. $R \leftarrow \text{opt}(t_1, t_2)$;
7. foreach $\langle t_4, t_5 \rangle \in R$ do
8. if $t_4, t_5$ not already mapped and $\text{label}(t_4) = \text{label}(t_5)$ then
9. $M \leftarrow M \cup \{\langle t_4, t_5 \rangle\}$;

4. TOOL

The algorithm described in the previous section has been implemented in a freely-available and extensible tool. AST differencing requires parsers (that produce the AST representation) to support a given programming language. This is clearly a constraint, since new languages do not work out of the box. Another interesting challenge faced by such a tool is that it is used by different actors with different expectations, such as a developer who wants a neat graphical display or a researcher who wants the results in a structured format that can be processed automatically. In this section we present our AST differencing tool, which allows the integration of new programming languages, differencing algorithms, and ways of providing results.

4.1 Architecture

Our tool uses a pipe-and-filter architecture, shown in Figure 2. Two input files are transformed into two ASTs by a parser. Since the parser is an abstract module, several concrete implementations can be furnished (such as JAVA or C). 
These two ASTs are then given to an abstract mappings module that computes as output a set of mappings. Since this module is also abstract, several concrete algorithms (such as GumTree or ChangeDistiller [13]) can be provided. Finally, this set of mappings is given to an actions module that computes the actual edit script. The input files, ASTs, mappings, and edit script are finally given to an abstract output module. Since this module is abstract, several outputs can be provided (e.g., XML, JSON, ...). Note that all the data structures are given to the output module; it can therefore operate on any of them (for instance it can produce the XML of an AST or of an edit script). Using this architecture, we have been able to integrate the JAVA (using the Eclipse JDT parser), JAVASCRIPT (using the Mozilla Rhino parser), R (using the FastR parser [17]) and C (using the Coccinelle parser [26]) programming languages. We have also integrated the GumTree, ChangeDistiller [13], XYDiff [8] and RTED [27] algorithms. Finally, we can produce the following outputs: a graphviz representation of an AST, an XML representation of an AST, a web-based view of an AST, an edit script (shown in Figure 3), and an XML representation of an edit script.

4.2 Runtime Performances

In this section, we want to assess the runtime performance of our tool on real data. As explained in the previous section, our tool applies the differencing algorithm on ASTs parsed from two versions of a source code file. We have integrated several parsers into our tool; we use the JAVA and JAVASCRIPT parsers in this section. To gather representative data to assess our tool, we arbitrarily selected two mature, popular and medium-to-large sized projects. For the JAVA language, we use Jenkins (a continuous integration server) and for JAVASCRIPT we use JQuery (a DOM manipulation library). 
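The pipe-and-filter architecture just described can be sketched as a pipeline of pluggable stages. The module names and the toy "parser" below are illustrative only, not the tool's actual API:

```python
# Sketch of the pipe-and-filter architecture: abstract parser, mappings,
# actions and output modules wired into one pipeline.
from typing import Callable

def run_pipeline(parse: Callable, match: Callable,
                 actions: Callable, output: Callable,
                 src: str, dst: str):
    t1, t2 = parse(src), parse(dst)          # parser filter
    mappings = match(t1, t2)                 # mappings filter (e.g. GumTree)
    script = actions(t1, t2, mappings)       # edit-script filter
    return output(t1, t2, mappings, script)  # output filter (XML, JSON, ...)

# Toy concrete modules: "parsing" splits pseudo-source into tokens,
# matching pairs identical tokens, actions lists the leftovers.
parse = lambda text: text.split()
match = lambda a, b: [(x, x) for x in a if x in b]
actions = lambda a, b, m: ([("delete", x) for x in a if x not in b] +
                           [("add", x) for x in b if x not in a])
output = lambda a, b, m, s: s

script = run_pipeline(parse, match, actions, output,
                      "int x =1", "int y = 1")
print(script)
```

Swapping in a different `match` (say, another differencing algorithm) or a different `output` (an XML serializer) changes one stage without touching the rest, which is the point of the architecture.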
We arbitrarily selected a complete release of each project, and extracted each file modification performed in the commits corresponding to this release. In Jenkins, we use the release 1.509.4 → 1.532.2, from which we extracted 1144 modifications. In JQuery, we use the revision 1.8.0 → 1.9.0, from which we extracted 650 modifications. Each modification consists of a pair of files (previous version and next version). They have been extracted thanks to the Harmony platform [12]. In this performance study, we want to assess two important aspects: running time and memory consumption. We use a MacBook Pro retina with a 2.7GHz Intel Core i7 with 16 Gb of RAM. To have reference measures, we use three other tools in addition to our tool. The complete list of tools we use is:
- A classical text diff tool, which computes an edit script with add and delete actions on text lines. As explained in Section 6, this tool is very fast and therefore represents the lower bound for a code differencing algorithm. In our experiment, we use the Google implementation.\(^3\)
- The parser included in GumTree, which only parses the two files involved in the modification without applying AST differencing algorithms. As parsing the files is mandatory to perform AST differencing, it represents the lower bound for an AST differencing algorithm. In our experiment, we use the Eclipse JDT parser to parse JAVA files, and the Mozilla Rhino parser to parse JAVASCRIPT files.
- The GumTree algorithm (including parsing), with the following thresholds: minHeight = 2, minDice = 0.5 and maxSize = 100.
- The RTED algorithm (including parsing), which computes an edit script on an AST with add, update and delete actions. As explained in Section 6, RTED has a cubic worst-case complexity ($O(n^3)$). Therefore it represents an upper bound for AST differencing. In our experiment we use the implementation provided by Pawlik et al.\(^4\) in our framework. 
We only compare GumTree to text diff and RTED because we have re-implemented the other algorithms included in our tool by following the descriptions in the articles, but with no particular care for optimization. Therefore, reporting memory consumption or running times for these algorithms would not be fair. For the memory consumption, we ensure that the tools can run using 4 GB of RAM, a common amount of memory in modern computers. To that end, we use a JAVA virtual machine bound to 4 GB of memory. We run each tool on each modification, and count the number of modifications leading to an out-of-memory error. In this experiment the only tool that underwent out-of-memory errors is RTED, with 82 errors (around 5% of the modifications). Even though this number is not so high, it still shows that the complexity of RTED leads to very expensive memory consumption in some cases. For the running time we perform two experiments. In the first experiment, we investigate if the tools are capable of computing an edit script of a modification in less than 10 seconds. Beyond 10 seconds, we believe that the tools will not be used interactively by developers. To that end, we run each tool on each modification and count the number of cases where the execution lasted more than 10 seconds. In this experiment, only RTED underwent such cases, 206 times (around 12% of the cases with no out-of-memory error). Therefore, on our data, RTED is not capable of computing an edit script in around 17% of the cases, which is a large number. It clearly shows that the complexity of this algorithm is not suitable for real data. In the second experiment, we compare the running times of the tools. To compute the running times, we compute the edit scripts ten times for each modification, and we retain the median of these values.

\(^3\)code.google.com/p/google-diff-match-patch
\(^4\)www.inf.unibz.it/dia/projects/tree-edit-distance
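The measurement protocol (ten runs per modification, median retained, 10-second cutoff) can be sketched as follows; `tool` stands for any callable differencing implementation and is a placeholder, not one of the actual tools.

```python
import statistics
import time

def median_runtime(tool, before, after, runs=10, timeout=10.0):
    """Run `tool` on a file pair `runs` times and keep the median
    wall-clock time; a run exceeding `timeout` seconds marks the
    modification as a timeout case, to be discarded from comparisons."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        tool(before, after)
        elapsed = time.perf_counter() - start
        if elapsed > timeout:
            return None  # timeout case
        times.append(elapsed)
    return statistics.median(times)
```

A real harness would additionally bound the JVM heap and record out-of-memory failures, as described above.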
To avoid noise in the measures, we ensure that the JAVA virtual machine is hot by running each algorithm a hundred times on a random modification, i.e., that no more dynamic loading is involved and that the JIT compiler has compiled and installed the code corresponding to hot-spots. We also pre-load all files involved in the modifications to avoid IO latencies. To be able to compare the tools on the same dataset, we discarded all the modifications that led to an out-of-memory error or an execution timeout (execution lasting more than 10 seconds) for at least one tool. To present the values, we use the running time of text diff as a reference value, since it is the fastest existing tool. Therefore for each modification, we divide the running time of the Parsing, GumTree and RTED tools by the running time of the text diff tool. This ratio represents the number of times that the tool is slower than performing a text differencing. We then present the boxplots of the distributions of these resulting ratios. Figure 4 shows the results of the second experiment. The first interesting conclusion is that just parsing the files is significantly longer than performing a text differencing: the median of parsing time ratios is 10. Additionally, we see that computing an edit script with GumTree is only slightly slower than just parsing the files (median at 18 for Jenkins and 30 for JQuery). The difference between the Jenkins and JQuery medians indicates that JAVASCRIPT ASTs are likely to contain more nodes than JAVA ASTs. Finally we see that RTED is significantly slower than just parsing the files (median at 298 for Jenkins and 2654 for JQuery). The difference between the two medians is also observed for the RTED tool. As a conclusion, we clearly see that the text diff tool is by far the fastest. However, performing AST differencing with GumTree induces only a small overhead over parsing the files. It means that our algorithm is fast and can therefore be applied on real data.
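The normalization step — dividing each tool's running time by the text diff time on the same modification, then summarizing with medians — can be reproduced as below. The numbers are illustrative, not the measured values from Figure 4.

```python
import statistics

def slowdown_ratios(tool_times, diff_times):
    """How many times slower the tool is than text diff, per modification."""
    return [t / d for t, d in zip(tool_times, diff_times)]

# Illustrative per-modification times in milliseconds.
diff_ms = [1.0, 2.0, 4.0]
gumtree_ms = [18.0, 40.0, 60.0]

ratios = slowdown_ratios(gumtree_ms, diff_ms)
print(statistics.median(ratios))  # 18.0
```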
The mean running times of GumTree are 20 milliseconds on Jenkins and 74 milliseconds on JQuery. Our experiments also confirm that using RTED on real data induces a huge overhead compared to text diff.

5. EVALUATION

We now present the empirical evaluation of GumTree. Our goal is to answer the following research questions: RQ1) Does GumTree produce tree differences that are correct and better than Unix diff (5.1)? RQ2) Does GumTree maximize the number of mappings and minimize the edit script size compared to the existing algorithms (5.2)? RQ3) Does GumTree detect move operations better than ChangeDistiller (5.3)? We discuss the threats to the validity of our results in 5.4.

5.1 Manual Evaluation

First, we consider the viewpoint of the developer. For her, what matters is that the computed edit script helps her understand the change. The experiment consists in the manual evaluation of file differences, i.e. in the manual assessment of a file pair difference (the difference between the version before and the version after the commit). These file differences are computed using two techniques: GumTree and a state of the art text differencing tool\(^5\). We will refer to the text differencing tool as diff, and to GumTree as GT. For each file pair of a dataset, the outputs from both approaches are given to a human evaluator. He/she compares both outputs and then answers the following questions: Is the GumTree output correct? and Which technique yields the most understandable differencing information: GumTree or diff? For example, the revision 1.15 of the class ARecord of the DNSJava project\(^6\) introduces a new parameter (int index) in the method called rrToWire. The diff output is a source code hunk pair: the left hunk is composed of the line cor-

\(^5\)mergely.com
\(^6\)sf.net/projects/dnsjava

5.1.2 Experiment Setup

A commit is composed of a set of modified files.
After a commit, each modified file is said to have a new revision. In our experiment, each file pair corresponds to consecutive file revisions. We used stratified sampling to randomly select revisions from the software history of 16 open source projects (from [23]). We only consider revisions with few source code changes (those revisions for which the ChangeDistiller differencing algorithm states that there is only one single source code change). We pick 10 items (file pairs) per project (or fewer when 10 such simple revisions are not found). In total, the dataset contains 144 transactions. Then, we create an evaluation item for each pair of files of the evaluation dataset. An evaluation item contains: the GumTree output between the revision pair of the transaction, the diff output between the same pair, and the commit message associated with the transaction. The diff output shows two files (called left and right) and highlights the changes made per line. In particular, it highlights the lines deleted from the left file and the lines added in the right file. Note that we have configured diff to discard whitespaces. The GumTree output (shown in Figure 1) highlights the added, deleted, updated and moved AST nodes. The commit message describes the intention of the change; it sometimes helps to meaningfully assess the relevance of both differencing algorithms. The 144 evaluation items were independently evaluated by three authors of this paper, called the raters. All 3 raters evaluated all the edit scripts of the 144 file pairs at the AST and line level (i.e. 288 outputs). This makes a total of $3 \times 2 \times 144 = 864$ ratings. The rater has to answer the following questions:

- **Question #1:** Does GumTree do a good job? The possible answers are: 1. GumTree does a good job: it helps to understand the change. 2. GumTree does a bad job.
- **Question #2:** Is GumTree better than diff? The possible answers are: 1. GumTree is better. 2. diff is better. 3.
GumTree is equivalent to diff. Optionally, the rater could write a comment to explain his decision. Those comments are used to identify buggy or corner cases where GumTree could be improved.

<table>
<thead>
<tr>
<th></th>
<th></th>
<th>Full (3/3)</th>
<th>Majority (2/3)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">#1</td>
<td>GT does good job</td>
<td>122</td>
<td>137</td>
</tr>
<tr>
<td>GT does not do good job</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>Neutral</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td rowspan="3">#2</td>
<td>GT better</td>
<td>28</td>
<td>66</td>
</tr>
<tr>
<td>Diff better</td>
<td>3</td>
<td>12</td>
</tr>
<tr>
<td>Equivalent</td>
<td>45</td>
<td>61</td>
</tr>
</tbody>
</table>

Table 1: Agreements of the manual inspection of the 144 transactions by three raters for Question #1 (top) and Question #2 (bottom).

5.1.3 Experiment Result

Table 1 (top) presents the number of agreements for the first question. Considering question #1, the three raters fully agreed for 122/144 (84.7%) file pairs that GumTree does a good job in explaining the change. If we consider the majority (at least 2/3 agree), it has been assessed that GumTree has a good output for 137/144 file pairs (95.1%). Table 1 (bottom) presents the number of agreements for the second question. In 28/144 (19.4%) evaluation items, there was a full agreement that GumTree better highlights the changes between two files. In 45/144 (31%) items the raters fully agreed that GumTree's output is as good as the one of diff to explain the change. This shows that, intuitively, GumTree is a tool that has added value compared to diff. Beyond those raw numbers, let us now measure the statistical level of agreement.

5.1.4 Statistics

Let us assume that $p_i$ measures the degree of agreement for a single item (in our case in $\{1/3, 2/3, 1\}$). The overall agreement $\bar{P}$ [9] is the average over the $p_i$, where $i \in \{1, \ldots, 144\}$.
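With three raters, the per-item values $p_i \in \{1/3, 2/3, 1\}$ quoted above correspond to the fraction of raters giving the modal answer (all differ, two agree, all agree). A sketch of $\bar{P}$ under that reading, on toy ratings rather than the study data:

```python
from collections import Counter

def item_agreement(ratings):
    """Fraction of raters giving the most common answer for one item:
    with 3 raters this is 1, 2/3 or 1/3."""
    counts = Counter(ratings)
    return max(counts.values()) / len(ratings)

def overall_agreement(items):
    """P-bar: the mean of the per-item agreements."""
    scores = [item_agreement(r) for r in items]
    return sum(scores) / len(scores)

# Toy data: one full agreement, one majority, one full disagreement.
items = [("good", "good", "good"),
         ("good", "good", "bad"),
         ("good", "bad", "neutral")]
print(round(overall_agreement(items), 3))  # 0.667
```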
The coefficient $\kappa$ (Kappa) [9, 16] measures the confidence in the agreement level by removing the chance factor\(^7\). 1. For Question #1: We have $\bar{P} = 0.905$. Using the scale introduced by [18], this value means there is an *almost perfect agreement*. The $\kappa$ degree of agreement in our study is 0.321, a value distant from the critical value, which is 0. The null hypothesis is rejected: the observed agreement was not due to chance. 2. For Question #2: We have $\bar{P} = 0.674$. Using the mentioned scale, this value means there is a *substantial overall agreement between the raters*. The $\kappa$ degree of agreement in our study is 0.426, far higher than the critical value. The null hypothesis is rejected: the observed agreement was not due to chance.

5.1.5 Conclusion

The manual rating of 144 revisions by 3 independent raters shows that 1) GumTree can be used to compare two Java files in order to understand the essence of the change and 2) its output is sometimes more understandable than the one from diff. There is a statistically significant level of agreement between raters for both results.

\(^7\)Some degree of agreement is expected when the ratings are purely random [9, 16].

5.2 Automatic Evaluation

We are now confident that GumTree is good from the viewpoint of a human developer. We now assess whether GumTree maximizes the number of mappings and minimizes the edit script size. Compared to the previous manual evaluation, this evaluation is fully automatic. Consequently, it can be performed on a large scale.

5.2.1 Goal and measures

The goal of this experiment is to measure the performance of tree differencing algorithms with respect to: 1. the number of mappings; 2. the edit script size. We compare GumTree against the two major differencing algorithms (as of today): ChangeDistiller [13] and RTED [27]. Other algorithms exist but they have less impact compared to those two (in terms of publication visibility or citations).
For a description of ChangeDistiller and RTED, please refer to Section 6. As explained in Section 6, ChangeDistiller uses simplified ASTs where the leaf nodes are code statements. Therefore, we compute the metrics both for simplified ASTs (as described in [13]) and for raw ASTs generated by the Eclipse JDT parser. In the remainder of the section these granularities are called respectively CDG (ChangeDistiller granularity) and JDTG (Eclipse JDT granularity). Our motivation is to compare the GumTree and ChangeDistiller algorithms even on CDG ASTs: it would have been unfair to claim anything on ASTs that would be different from those for which ChangeDistiller is designed and optimized. Since the goal of GumTree is to work on fine-grained ASTs, we evaluate the performance metrics at this granularity as well. Finally, for the sake of comparison, we also evaluate RTED on fine-grained ASTs, since that is the granularity GumTree is designed for.

5.2.2 Procedure

The experiment consists in comparing source code file pairs using several AST differencing algorithms. We take a sample of 1000 revision pairs from 16 JAVA open source projects of the CVS-Vintage dataset [23]. For each revision pair, we create 4 tree representations (before and after the commit, at the CDG and JDTG granularities) and we run the RTED, ChangeDistiller and GumTree algorithms. The upper part of Table 2 presents the performance comparison of ChangeDistiller and GumTree at the CDG granularity, while the middle part shows them at the JDTG granularity (finer grain, more AST nodes). Finally the lower part compares the GumTree and RTED differencing algorithms at the JDTG granularity. Each cell of the table presents the number of cases where an approach is better than the other for a given AST granularity and measure. We now analyze the experimental results by metric.

**Mappings.** As explained in Section 2, finding the mappings is the most important step in an AST differencing algorithm. Finding more mappings increases the odds of deducing a short edit script.
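Each cell of Table 2 is a count of file pairs where one tool beats the other on a metric; such tallies can be computed with a simple comparison loop. The data below is illustrative, not the experimental values.

```python
def tally(metric_a, metric_b, higher_is_better=True):
    """Count how often tool A is better than, equal to, or worse than
    tool B on a per-pair metric."""
    better = same = worse = 0
    for a, b in zip(metric_a, metric_b):
        if a == b:
            same += 1
        elif (a > b) == higher_is_better:
            better += 1
        else:
            worse += 1
    return better, same, worse

# Mappings: more is better. Edit script size: fewer is better.
print(tally([10, 8, 7], [9, 8, 7]))                         # (1, 2, 0)
print(tally([3, 5, 2], [4, 5, 2], higher_is_better=False))  # (1, 2, 0)
```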
Considering the CDG granularity, in 4007 (31.32%) cases GumTree matches more nodes than ChangeDistiller. Then, in 8243 cases (64.44%) both approaches find the same number of mappings. At the JDTG granularity (finer grain), in 8378 (65.49%) cases GumTree matches more nodes than ChangeDistiller. In 4211 cases (32.92%) the number of mappings is the same. At both granularities, GumTree matches more nodes than ChangeDistiller. When comparing GT against RTED, in most of the cases (8752, 68.42%) the same number of mappings is found. However, GumTree finds more mappings than RTED more than twice as often as the opposite: 2806 (21.94%) vs 1234 (9.65%).

**Edit Script Size.** Once the mappings are found, an edit script is computed. The length of the edit script is a proxy for the cognitive load required for a developer to understand the essence of a commit. Hence, the goal is to minimize the size of edit scripts. Considering the CDG granularity, the sizes of the edit scripts of GT and ChangeDistiller are the same in 7442 cases (58.18%). In 4938 (38.6%) cases, the edit script from GumTree is shorter. The trend is the same at the JDTG granularity, where ChangeDistiller often produces bigger scripts: in 10358 (80.97%) cases (versus 175 cases (1.37%) where it performs better than GumTree). The comparison between GT and RTED shows that in most of the cases (59.25%) the edit script size is the same, and in 23.6% of the cases GT produces a shorter edit script than RTED. According to our dataset, GumTree consistently produces shorter edit scripts, which is better to understand the meaning of a commit.
<table>
<thead>
<tr>
<th rowspan="2"></th>
<th colspan="2">Mappings</th>
<th colspan="2">ES size</th>
</tr>
<tr>
<th>GT better</th>
<th>CD better</th>
<th>GT better</th>
<th>CD better</th>
</tr>
</thead>
<tbody>
<tr>
<td>CDG</td>
<td>4007 (31.32%)</td>
<td>542 (4.24%)</td>
<td>4938 (38.60%)</td>
<td>412 (3.22%)</td>
</tr>
<tr>
<td>JDTG</td>
<td>8378 (65.49%)</td>
<td>203 (1.59%)</td>
<td>10358 (80.97%)</td>
<td>175 (1.37%)</td>
</tr>
</tbody>
</table>

Table 2: Number of cases where GumTree is better (resp. worse and equivalent) than ChangeDistiller (top, middle) and RTED (bottom) for 2 metrics, number of mappings and edit script size (ES size), at the CDG granularity (top) and JDTG granularity (middle, bottom).

5.3 Analysis of Move Actions

Both GumTree and ChangeDistiller are able to detect move node actions. This section presents an analysis of the move actions found by the GumTree and ChangeDistiller matching algorithms. The goal of this experiment is to check how well these algorithms detect move actions. The evaluation metric cannot be the absolute number of detected move actions. The reason is twofold. On the one hand, one wants to maximize the number of moves (instead of having additions and deletions). On the other hand, one wants to minimize the number of spurious detected move actions that have nothing to do with the conceptual change. Consequently, we need a more subtle evaluation scenario. We propose to compare the number of moves by stratifying over the results of both algorithms. For instance, if ChangeDistiller is able to completely explain a commit with only move actions, GumTree should also find an edit script that is uniquely composed of moves. In this case, one can reasonably think that the edit script with the smallest number of moves is the best. So we compare the results for a number of different subcases.

5.3.1 Procedure

We analyze move actions from the differencing of the Java file pairs of the dataset introduced in Section 5.2.
We focus on the move actions from the edit scripts produced by ChangeDistiller (CD) and GumTree (GT). In this experiment we do not consider RTED because this algorithm does not identify move actions. We select those Java file pairs for which the edit script from ChangeDistiller or GumTree is only composed of move actions. From the initial dataset of 12 792 file pairs, this results in 130 elements. Then, to compare the edit scripts, we classify each pair of edit scripts (ChangeDistiller versus GumTree) into the following categories. 1. ChangeDistiller and GumTree produce only moves, with the same number of actions, i.e. they are equivalent (top-left). 2. ChangeDistiller and GumTree produce only moves, but in different numbers (top-left). (a) GumTree with more moves. (b) ChangeDistiller with more moves. 3. ChangeDistiller produces only move actions, and GumTree other actions which can include moves (top-right). 4. GumTree produces only move actions, and ChangeDistiller other actions (bottom-left). The analysis of the number of items in each category enables us to settle the question of the effectiveness of the detection of move actions.

5.3.2 Experiment Result

The results are presented in Table 3. There are 77 comparisons for which both matching algorithms produce only move actions; 58 out of these 77 cases correspond to case 1, where both algorithms produce exactly the same edit script (same number of moves). Then, there are 18 instances where ChangeDistiller has more move actions (case 2-b). There remains one case where GumTree produces more moves (case 2-a). This shows that GumTree edit scripts describe move actions more concisely than those of ChangeDistiller. Moreover, there are 52 differencing scenarios where GumTree produces only move actions while ChangeDistiller produces other kinds of actions (case 4). In these cases ChangeDistiller has other actions (e.g. one node addition and one node deletion) in addition to a move.
This means that GumTree is more precise in representing changes involving move actions. To sum up, according to our dataset, GumTree is better than ChangeDistiller at detecting move actions; it is both more concise and more precise.

5.4 Threats to Validity

We now discuss the threats to the validity of our evaluation setup. We first discuss those that are specific to a research question and then the generic ones. For the manual analysis, the main threat to validity is that the raters are also authors of this paper. To reassure the reader, the evaluation dataset is made publicly available. For the comparative analysis of the number of mappings and move operations between different tools, the main threat is a potential bug in our implementation. In particular, we have re-implemented ChangeDistiller because the original implementation needs an Eclipse stack. Our new implementation may not reflect all specific implementation decisions of the original implementation or may even introduce new bugs. For the sake of future analysis and replication, our implementation of the competitors is in the same repository as GumTree. We also note that we have experimented with the original ChangeDistiller in several experiments (for instance [20]) and have confidence that our implementation reflects the original one. Our experiments only consider edit scripts of Java files. This is a threat to the external validity. Although this is unlikely (our algorithm is independent of any Java specificity), we cannot conclude whether GumTree performs as well on other programming languages. Finally, the three thresholds of GumTree have been fixed to the following values: \(\text{minHeight} = 2\), \(\text{maxSize} = 100\) and \(\text{minDice} = 0.5\). These values have been chosen according to our expertise. However, other values could perform differently. More experiments are needed to evaluate their impact on the runtime efficiency and the algorithm results. 6.
RELATED WORK

In this section we present the related work on code differencing, from text to graph granularity.

Text Differencing. Computing differences between two versions of a source file is most commonly performed at the text line granularity [24, 22]. Within this granularity, the edit actions are the insertion, deletion or update of a text line. The more advanced algorithms are even capable of detecting moved lines [29, 5, 3]. The algorithms using this granularity are usually very fast and completely language independent, compared to our approach which requires a parser that slows down the whole process. The main issue with these algorithms is that they cannot compute fine-grained differences. Indeed in many languages (such as JavaScript or even Java) a text line can contain many programming constructs. Additionally, the output of these algorithms is very difficult to process automatically since it is composed of source code lines that might not be parsable, since they might be incomplete. It is therefore difficult to automatically extract the syntactic modifications using these approaches.

Tree and AST Differencing. The tree differencing problem has been largely investigated when considering only the add node, delete node and update node actions [4]. For this problem, many optimal algorithms have been described in the literature. The fastest algorithms of this family [27] run in \( O(n^3) \), which can result in significantly long edit script computation times for large source code files. The other issue faced by these algorithms is their inability to uncover moved nodes, which is a frequent action in source code files. It results in unnecessarily big edit scripts which are hard to understand. When considering move node actions, the problem of finding the shortest edit script between two trees becomes NP-hard.
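Although moves make optimal differencing NP-hard, they are worth pursuing: a subtree deleted in one place and re-added in another collapses into a single action. A toy post-processing step illustrates the effect on script length (the real heuristics work on mapped subtrees, not on textual equality as here):

```python
def introduce_moves(script):
    """Collapse a (delete, x) plus (add, x) pair into one (move, x),
    shortening the edit script by one action per such pair."""
    deletes = {node for kind, node in script if kind == "delete"}
    adds = {node for kind, node in script if kind == "add"}
    moved = deletes & adds
    out = [("move", n) for n in sorted(moved)]
    for kind, node in script:
        if node not in moved:
            out.append((kind, node))
    return out

script = [("delete", "f()"), ("add", "f()"), ("add", "g()")]
print(introduce_moves(script))  # [('move', 'f()'), ('add', 'g()')]
```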
However, several algorithms using practical heuristics, from the document engineering or software engineering research fields, exist in the literature. One of the most famous is the algorithm of Chawathe et al. [6] that computes edit scripts (containing move actions) on trees representing LaTeX files. Unfortunately, this algorithm has constraints (acyclic labels and leaf nodes containing a lot of text) that do not apply to fine-grained ASTs of general purpose programming languages. Several algorithms have also been designed specifically for XML documents [8, 1]. Unlike the algorithm of Chawathe et al., they do not have any particular constraint. However these algorithms put a particular emphasis on the edit script computation time because they are mostly used for automatic on-the-fly compression. Regarding our objective, the most important thing is to compute an edit script that reflects well the developer's intent; computation time is only secondary. GumTree is inspired by the algorithm of Cobena et al. [8], because we apply a very similar first phase. The major difference is that they are not interested in having fine-grained differences since the differencing is computed only for compression purposes. Our algorithm is much more accurate since it also performs a second phase that increases the number of mappings found, and therefore produces shorter edit scripts at the expense of the running time. The most famous algorithm that works on ASTs is ChangeDistiller [13]. It is largely inspired by the one of Chawathe et al. but tuned to work better on ASTs. However, this algorithm is still based on the assumption that leaf nodes contain a significant amount of text. Therefore, the authors use simplified ASTs where the leaves are in fact code statements, rather than raw ASTs. Therefore ChangeDistiller will not compute fine-grained edit scripts on languages that can have a lot of elements in statements (such as JavaScript).
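The difference between raw and statement-leaf ASTs can be made concrete: collapsing every statement subtree into a single leaf holding its text yields a ChangeDistiller-style simplified tree. The node kinds and helper names below are ours, for illustration only.

```python
class Node:
    def __init__(self, kind, text="", children=None):
        self.kind, self.text = kind, text
        self.children = children or []

STATEMENT_KINDS = {"ExpressionStatement", "ReturnStatement"}  # illustrative

def flatten(node):
    """Concatenate all text found in a subtree."""
    parts = [node.text] if node.text else []
    parts += [flatten(c) for c in node.children]
    return " ".join(p for p in parts if p)

def simplify(node):
    """Collapse statement subtrees into leaves (statement-leaf granularity)."""
    if node.kind in STATEMENT_KINDS:
        return Node(node.kind, text=flatten(node))
    return Node(node.kind, node.text, [simplify(c) for c in node.children])

def size(node):
    return 1 + sum(size(c) for c in node.children)

# A tiny raw AST for `return a + b;`: 5 nodes raw, 2 nodes simplified.
tree = Node("Block", children=[
    Node("ReturnStatement", children=[
        Node("InfixExpression", children=[
            Node("SimpleName", "a"), Node("SimpleName", "b")])])])
print(size(tree), size(simplify(tree)))  # 5 2
```

On such simplified trees, everything inside a statement becomes invisible to the matcher, which is why statement-leaf algorithms miss fine-grained changes in expression-heavy languages.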
The Diff/TS algorithm [15] is able to work on raw ASTs. The automatic experiment performed in the original article shows that it efficiently produces short edit scripts. However the results of this algorithm have not been validated by humans. The VDiff algorithm [10] generates edit scripts from Verilog HDL files. It is somewhat similar to the first phase of GumTree, but it also uses the lexical similarity of the code. However, the generated edit scripts are specific to the Verilog language. Finally the JSync [25] algorithm is also able to compute edit scripts that include move actions. However it relies on a classical text differencing algorithm applied on the unparsed ASTs as a first step. This limits its ability to find moved nodes. Additionally, it is focused on producing information on clones rather than edit scripts.

Graph Differencing and Origin Analysis. There are several algorithms that go beyond the AST structure and compute edit scripts on graphs representing the source code [28, 31, 2, 11]. Although these algorithms can uncover semantic differences, they are significantly harder to use in practice, mainly because they require much more semantic information about the source code (such as program dependency graphs, class models, meta-models or control-flow graphs) which is very hard if not impossible to obtain in many languages (such as JavaScript or C), or when considering only program fragments (e.g., plug-ins). Finally there are also many algorithms that perform the so-called origin analysis (e.g., [14, 30]). These algorithms output the matching program elements between two versions. They usually use lexical as well as structural similarities between the elements. However they only consider a few kinds of program elements (usually classes, functions and attributes) and do not output edit scripts.

7. CONCLUSION AND FUTURE WORK

In this article, we present a novel algorithm that computes fine-grained edit scripts on ASTs, including move node actions.
Our algorithm is implemented in a freely-available and extensible tool. We have evaluated the running time and memory consumption of our tool, and shown that it is reasonable on real data and that the tool can be used on a daily basis. We have also performed an empirical evaluation of the results of our tool. This evaluation shows that the results of our algorithm are good, and often more comprehensible than the results of a classical text diff. As future work, we plan to extend our tool to extract modifications that are performed across files. We also plan to introduce new algorithms that can automatically process an edit script of GumTree to produce higher-order edit scripts (for instance to identify refactorings). Finally, as the bottleneck of this approach is the parsing, we consider moving to more fuzzy parsers, in order to accept malformed files and reduce the parsing time.

8. REFERENCES
An Introduction to the $\pi$-Calculus

Joachim Parrow*

Dep. Teleinformatics, Royal Institute of Technology, Stockholm

Abstract

The $\pi$-calculus is a process algebra where processes interact by sending communication links to each other. This paper is an overview of and introduction to its basic theory. We explore the syntax, semantics, equivalences and axiomatisations of the most common variants.

*email joachim@it.kth.se

# Contents

1 Introduction
2 The $\pi$-Calculus
   2.1 Basic Definitions
   2.2 Structural Congruence
   2.3 Simple Examples
3 Variants of the Calculus
   3.1 Match and Mismatch
   3.2 Sum
   3.3 The Polyadic Calculus
   3.4 Recursion and Replication
   3.5 The Asynchronous Calculus
   3.6 The Higher-Order Calculus
4 Operational Semantics
5 Variants of the Semantics
   5.1 The Role of Structural Congruence
   5.2 Symbolic Transitions
   5.3 The Early Semantics
   5.4 Reductions
   5.5 Abstractions and Concretions
6 Bisimilarity and Congruence
   6.1 Bisimilarity
   6.2 Congruence
7 Variants of Bisimilarity
   7.1 Early Bisimulation
   7.2 Barbed Congruence
   7.3 Open Bisimulation
   7.4 Weak Bisimulation
8 Algebraic Theory
   8.1 Bisimilarity
   8.2 Congruence
9 Variants of the Theory
   9.1 Early Bisimilarity and Congruence
   9.2 Open Bisimilarity
   9.3 Weak Congruence
10 Sources

1 Introduction

The $\pi$-calculus is a mathematical model of processes whose interconnections change as they interact. The basic computational step is the transfer of a communication link between two processes; the recipient can then use the link for further interaction with other parties. This makes the calculus suitable for modelling systems where the accessible resources vary over time. It also provides a significant expressive power since the notions of access and resource underlie much of the theory of concurrent computation, in the same way as the more abstract and mathematically tractable concept of a function underlies functional computation. This introduction to the $\pi$-calculus is intended for a theoretically inclined reader who knows a little about the general principles of process algebra and who wishes to learn the fundamentals of the calculus and its most common and stable variants. Let us first consider an example. Suppose a server controls access to a printer and that a client wishes to use it. In the original state only the server itself has access to the printer, represented by a communication link $a$.
After an interaction with the client along some other link $b$ this access to the printer has been transferred:

Before interaction: ![Diagram of before interaction] After interaction: ![Diagram of after interaction]

In the $\pi$-calculus this is expressed as follows: the server that sends $a$ along $b$ is $\overline{b}a.S$; the client that receives some link along $b$ and then uses it to send data along it is $b(c).\overline{c}d.P$. The interaction depicted above is formulated $$\overline{b}a.S \mid b(c).\overline{c}d.P \xrightarrow{\tau} S \mid \overline{a}d.P$$ We see here that $a$ plays two different roles. In the interaction between the server and the client it is an object transferred from one to the other. In a further interaction between the client and the printer it is the name of the communication link. The idea that the names of the links belong to the same category as the transferred objects is one of the cornerstones of the calculus, and is one way in which it is different from other process algebras. In the example \(a, b, c, d\) are all just \textit{names} which intuitively represent access rights: \(a\) accesses the printer, \(b\) accesses the server, \(d\) accesses some data, and \(c\) is a placeholder for an access to arrive along \(b\). If \(a\) is the only way to access the printer then we can say that the printer “moves” to the client, since after the interaction nothing else can access it. For this reason the \(\pi\)-calculus has been called a calculus of “mobile” processes. But the calculus is much more general than that. The printer may have many links that make it do different things, and the server can send these links to different clients to establish different access capabilities to a shared resource. At first sight it appears as if the \(\pi\)-calculus is just a specialised form of a value-passing process algebra where the values are links.
In such a comparison the calculus may be thought rather poor since there are no data types and no functions defined on the names; the transferable entities are simple atomic things without any internal structure. The reason that the \(\pi\)-calculus nevertheless is considered more expressive is that it admits migrating local scopes. This important point deserves an explanation here. Most process algebras have a way to declare a communication link local to a set of processes. For example in CCS the fact that \(P\) and \(Q\) share a private port \(a\) is symbolised by \((P|Q)\setminus a\), where the operator \(\setminus a\) is called restriction on \(a\). The significance is that no other process can use the local link \(a\), as if it were a name distinct from all other names in all processes. In the \(\pi\)-calculus this restriction is written \((\nu a)(P|Q)\). It is similar in that no other process can use \(a\) immediately as a link to \(P\) or \(Q\). The difference is that the name \(a\) is also a transferable object and as such can be sent, by \(P\) or \(Q\), to another process which then can use the restricted link. Returning to the example above suppose that \(a\) is a local link between the server and the printer. Represent the printer by \(R\); then this is captured by \((\nu a)(\overline{b}a . S \mid R)\). The server is still free to send \(a\) along \(b\) to the client. The result would be a private link shared between all three processes, but still distinct from any other name in any other process, and the transition is consequently written \[ (\nu a)(\overline{b}a . S \mid R) \mid b(c) . \overline{c}d . P \xrightarrow{\tau} (\nu a) (S \mid R \mid \overline{a}d . P) \] So, although the transferable objects are simple atomic things they can also be declared local with a defined scope, and in this way the calculus transcends the ordinary value-passing process algebras.
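The link-passing step above can be mimicked in ordinary code. As a loose analogy (ours, not the paper's), Python `queue.Queue` objects can play the role of links: a queue is a first-class value that can itself be sent along another queue, just as names are both links and transferable objects. The names `b`, `a`, `c` follow the printer example; this is only an illustration, not a semantics for the calculus.

```python
import queue

# Links are modelled as Queue objects; putting a Queue onto another Queue
# mimics sending a link along a link.
b = queue.Queue()        # the public link b between server and client

# Server side, roughly (νa)(b̄a.S | R): the printer link a is created
# locally (the restriction) and then sent along b (scope extrusion).
a = queue.Queue()
b.put(a)

# Client side, roughly b(c).c̄d.P: receive some link along b, then use it.
c = b.get()
c.put("d")

# The printer R now finds the datum on what used to be its private link:
assert a.get() == "d"
assert c is a            # the client holds the very same link, not a copy
```

Note that nothing else in the program can reach `a` unless it is handed a reference, which is the operational content of the restriction.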
This is also the main source of difficulty in the development of the theory because the scope of an object, as represented by the operands of its restriction, must migrate with the object as it is transferred between processes. The \(\pi\)-calculus is far from a single well defined body of work. The central idea, a process algebraic definition of link-passing, has been developed in several directions to accommodate specific applications or to determine the effects of various semantics. Proliferation is certainly a healthy sign for any scientific area although it poses problems for those who wish to get a quick overview. Presumably some readers new to the π-calculus will be satisfied with a compact presentation of a single version, while others may be interested in the spectrum of variations. This paper aims to serve both these needs. In the following, the even-numbered sections develop a single strand of the calculus. Section 2 presents the syntax and gives some small examples of how it is used. In Section 4 we proceed to the semantics in its most common form as a labelled transition system. In Section 6 we consider one of the main definitions of bisimulation and the congruence it induces, and in Section 8 we look at their axiomatisations through syntactic equalities of agents. These sections do not depend on the odd-numbered sections and can be considered as a basic course on the calculus. There will be full definitions and formulations of the central results, and sketches that explain the ideas and structure of the proofs. Each odd-numbered section presents variations on the material in the preceding one. Thus, in Section 3 we explore different versions of the calculus, such as the effect of varying the operators, and the asynchronous, polyadic, and higher-order calculus. Section 5 treats alternative ways to define the semantics, with different versions of labelled and unlabelled transitions.
Section 7 defines a few other common bisimulation equivalences (the π-calculus, like any process algebra, boasts a wide variety of equivalences but in this paper we concentrate on the aspects particular to π), and their axiomatisations are treated in Section 9. In these sections we do not always get a full formal account, but hopefully enough explanations that the reader will gain an understanding of the basic ideas. Finally, Section 10 contains references to other work. We give a brief account of how the calculus evolved and mention other overviews and introductory papers. We also indicate sources for the material treated in this paper. It must be emphasised that there are some aspects of the π-calculus we do not treat at all, such as modal logics, analysis algorithms, implementations, and ways to use the calculus to model concurrent systems and languages. Also, the different variants can be combined in many ways, giving rise to a large variety of calculi. I hope that after this introduction a reader can explore the field with some confidence. 2 The $\pi$-Calculus We begin with a sequence of definitions and conventions. The reader who makes it to Section 2.3 will be rewarded with small but informative examples. 2.1 Basic Definitions We assume a potentially infinite set of names $\mathcal{N}$, ranged over by $a, b, \ldots, z$, which will function as all of communication ports, variables and data values, and a set of (agent) identifiers ranged over by $A$, each with a fixed nonnegative arity. The agents, ranged over by $P, Q, \ldots$ are defined in Table 1. From that table we see that the agents can be of the following forms: 1. The empty agent $0$, which cannot perform any actions. 2. An Output Prefix $\overline{a}x . P$. The intuition is that the name $x$ is sent along the name $a$ and thereafter the agent continues as $P$. So $\overline{a}$ can be thought of as an output port and $x$ as a datum sent out from that port. 3. An Input Prefix $a(x) . 
P$, meaning that a name is received along a name $a$, and $x$ is a placeholder for the received name. After the input the agent will continue as $P$ but with the newly received name replacing $x$. So $a$ can be thought of as an input port and $x$ as a variable which will get its value from the input along $a$. 4. A Silent Prefix $\tau . P$, which represents an agent that can evolve to $P$ without interaction with the environment. We use $\alpha, \beta$ to range over $a(x)$, $\overline{a}x$ and $\tau$ and call them Prefixes, and we say that $\alpha . P$ is a Prefix form, or sometimes just Prefix when this cannot cause confusion. 5. A Sum $P + Q$ representing an agent that can enact either $P$ or $Q$. 6. A Parallel Composition $P | Q$, which represents the combined behaviour of $P$ and $Q$ executing in parallel. The components $P$ and $Q$ can act independently, and may also communicate if one performs an output and the other an input along the same port. 7. A $\text{Match if } x = y \text{ then } P$. As expected this agent will behave as $P$ if $x$ and $y$ are the same name, otherwise it does nothing. 8. A $\text{Mismatch if } x \neq y \text{ then } P$. This agent will behave as $P$ if $x$ and $y$ are not the same name, otherwise it does nothing. 9. A $\text{Restriction } (\nu x)P$. This agent behaves as $P$ but the name $x$ is local, meaning it cannot immediately be used as a port for communication between $P$ and its environment. However, it can be used for communication between components within $P$.

Prefixes
\[
\alpha ::= \overline{a}x \;\;(\text{Output}) \;\mid\; a(x) \;\;(\text{Input}) \;\mid\; \tau \;\;(\text{Silent})
\]
Agents
\[
P ::= 0 \;\mid\; \alpha . P \;\mid\; P + P \;\mid\; P \,|\, P \;\mid\; \text{if } x = y \text{ then } P \;\mid\; \text{if } x \neq y \text{ then } P \;\mid\; (\nu x)P \;\mid\; A(y_1, \ldots, y_n)
\]
(in order: Nil, Prefix, Sum, Parallel, Match, Mismatch, Restriction, Identifier)
Definitions
\[
A(x_1, \ldots, x_n) \overset{\text{def}}{=} P \quad (\text{where } i \neq j \Rightarrow x_i \neq x_j)
\]
Table 1: The syntax of the \(\pi\)-calculus.

10. An Identifier \(A(y_1, \ldots, y_n)\) where \(n\) is the arity of \(A\). Every Identifier has a Definition \(A(x_1, \ldots, x_n) \overset{\text{def}}{=} P\) where the \(x_i\) must be pairwise distinct, and the intuition is that \(A(y_1, \ldots, y_n)\) behaves as \(P\) with \(y_i\) replacing \(x_i\) for each \(i\). So a Definition can be thought of as a process declaration, \(x_1, \ldots, x_n\) as formal parameters, and the Identifier \(A(y_1, \ldots, y_n)\) as an invocation with actual parameters \(y_1, \ldots, y_n\). The operators are familiar from other process algebras so we shall in the following concentrate on some important aspects particular to the \(\pi\)-calculus, trusting the reader to be confident with the more general principles. The forms Nil, Sum and Parallel have exactly the same meaning and use as in other process algebras, and the Prefix forms are as in the algebras that admit value-passing. The if constructs Match and Mismatch may appear limited in comparison with value-passing algebras which usually admit arbitrary Boolean expressions (evaluating to either true or false). But on closer consideration it is apparent that combinations of Match and Mismatch are the only possible tests that can be performed in the \(\pi\)-calculus: the objects transmitted are just names and these have no structure and no operators are defined on them, so the only thing we can do is compare names for equality. We can combine such tests conjunctively by nesting them, for example \[\text{if } x = y \text{ then if } u \neq v \text{ then } P\] behaves as \( P \) if both \( x = y \) and \( u \neq v \) hold.
We can combine them disjunctively by using Sum, for example \[ \text{if } x = y \text{ then } P + \text{if } u \neq v \text{ then } P \] behaves as \( P \) if at least one of \( x = y \) and \( u \neq v \) holds. Sometimes we shall use a binary conditional \[ \text{if } x = y \text{ then } P \text{ else } Q \] as an abbreviation for \( \text{if } x = y \text{ then } P + \text{if } x \neq y \text{ then } Q \). As in other algebras we say that \( P \) is guarded in \( Q \) if \( P \) is a proper subterm of a Prefix form in \( Q \). Also, the input Prefix \( a(x) \cdot P \) is said to bind \( x \) in \( P \), and occurrences of \( x \) in \( P \) are then called bound. In contrast the output Prefix \( \overline{a}x \cdot P \) does not bind \( x \). These Prefixes are said to have subject \( a \) and object \( x \), where the object is called free in the output Prefix and bound in the input Prefix. The silent Prefix \( \tau \) has neither subject nor object. The Restriction operator \( (\nu x)P \) also binds \( x \) in \( P \). Its effect is as in other algebras (where it is written \( \backslash x \) in CCS and \( \partial_x \) in ACP) with one significant difference. In ordinary process algebras the things that are restricted are port names and these cannot be transmitted between agents. Therefore the restriction is static in the sense that the scope of a restricted name does not need to change when an agent executes. In the \( \pi \)-calculus there is no difference between "port names" and "values", and a name that represents a port can indeed be transmitted between agents. If that name is restricted the scope of the restriction must change, as we shall see, and indeed almost all of the increased complexity and expressiveness of the \( \pi \)-calculus over value-passing algebras come from the fact that restricted things move around.
The reader may also think of \( (\nu x)P \) as "new \( x \) in \( P \)", by analogy with the object-oriented use of the word "new", since this construct can be thought of as declaring a new and hitherto unused name, represented by \( x \), for the benefit of \( P \). In summary, both input Prefix and Restriction bind names, and we can define the bound names \( \text{bn}(P) \) as those with a bound occurrence in \( P \) and the free names \( \text{fn}(P) \) as those with an occurrence that is not bound, and similarly \( \text{bn}(\alpha) \) and \( \text{fn}(\alpha) \) for a Prefix \( \alpha \). We sometimes write \( \text{fn}(P, Q) \) to mean \( \text{fn}(P) \cup \text{fn}(Q) \), and just \( \alpha \) for \( \text{fn}(\alpha) \cup \text{bn}(\alpha) \) when it is apparent that it represents a set of names, such as in "\( x \in \alpha \)". In a Definition \( A(x_1, \ldots, x_n) \overset{\text{def}}{=} P \) we assume that \( \text{fn}(P) \subseteq \{x_1, \ldots, x_n\} \). In some examples we shall elide the parameters of Identifiers and Definitions when they are unimportant or can be inferred from context. A substitution is a function from names to names. We write \( \{x/y\} \) for the substitution that maps \( y \) to \( x \) and is identity for all other names, and in general \( \{x_1, \ldots, x_n/y_1, \ldots, y_n\} \), where the \( y_i \) are pairwise distinct, for a function that maps each \( y_i \) to \( x_i \). We use \( \sigma \) to range over substitutions, and sometimes write \( \tilde{x} \) for a sequence of names when the length is unimportant or can be inferred from context. The agent $P\sigma$ is $P$ where all free names $x$ are replaced by $\sigma(x)$, with alpha-conversion wherever needed to avoid captures. This means that bound names are renamed such that whenever $x$ is replaced by $\sigma(x)$ then the so obtained occurrence of $\sigma(x)$ is free.
For example, $$(a(x) \cdot (\nu b) \overline{x}b \cdot \overline{c}y \cdot 0) \{x, b / y, c\} \quad \text{is} \quad a(z) \cdot (\nu d) \overline{z}d \cdot \overline{b}x \cdot 0$$ A process algebra fan may have noticed that one common operator is not present in the $\pi$-calculus: that of relabelling (in CCS written $[a / b]$). The primary use of relabelling is to define instances of agents from other agents, for example, if $B$ is a buffer with ports $i$ and $o$ then $B[i'/i, o'/o]$ is a buffer with ports $i'$ and $o'$. In the $\pi$-calculus we will instead define instances through the parameters of the Identifiers, so for example a buffer with ports $i$ and $o$ is $B(i, o)$, and with ports $i'$ and $o'$ it is $B(i', o')$. For injective relabellings this is just another style of specification which allows us to economise on one operator. (A reader familiar with the CCS relabelling should be warned that it has the same effect as port substitution only if injective. In general they are different.) Finally some notational conventions: A sum of several agents $P_1 + \cdots + P_n$ is written $\sum_{i=1}^n P_i$, or just $\sum_i P_i$ when $n$ is unimportant or obvious, and we here allow the case $n = 0$ when the sum means 0. A sequence of distinct Restrictions $(\nu x_1) \cdots (\nu x_n) P$ is often abbreviated to $(\nu x_1 \cdots x_n) P$. In a Prefix we sometimes elide the object if it is not important, so $a \cdot P$ means $a(x) \cdot P$ where $x$ is a name that is never used, and similarly for output. And we sometimes elide a trailing 0, writing $\alpha$ for the agent $\alpha \cdot 0$, where this cannot cause confusion. We give the unary operators precedence over the binary and $|$ precedence over $+$, so for example $(\nu x) P \mid Q + R$ means $((\nu x) P) \mid (Q + R)$. 2.2 Structural Congruence The syntax of agents is in one sense too concrete.
For example, the agents $a(x) \cdot \overline{c}x$ and $a(y) \cdot \overline{c}y$ are syntactically different, although they only differ in the choice of bound name and therefore intuitively represent the same behaviour: an agent that inputs something along $a$ and then outputs it along $c$. As another example the agents $P | Q$ and $Q | P$ represent the same thing: a parallel composition of the agents $P$ and $Q$. Our intuition about parallel composition is that it is inherently unordered, and we are forced to syntactically distinguish between $P | Q$ and $Q | P$ only because our language is linear. We therefore introduce a structural congruence to identify the agents which intuitively represent the same thing. It should be emphasised that this has nothing to do with the traditional behavioural equivalences in process algebra which are defined in terms of the behaviour exhibited by an agent under some operational semantics. We have yet to define a semantics, and the structural congruence identifies only agents where it is immediately obvious from their structure that they are the same. The structural congruence \( \equiv \) is defined as the smallest congruence satisfying the following laws: 1. If \( P \) and \( Q \) are variants under alpha-conversion then \( P \equiv Q \). 2. The Abelian monoid laws for Parallel: commutativity \( P|Q \equiv Q|P \), associativity \( (P|Q)|R \equiv P|(Q|R) \), and \( 0 \) as unit \( P|0 \equiv P \); and the same laws for Sum. 3. The unfolding law \( A(\bar{y}) \equiv P\{\bar{y}/\bar{x}\} \) if \( A(\bar{x}) \overset{\text{def}}{=} P \). 4. 
The scope extension laws \[ \begin{align*} (\nu x)0 & \equiv 0 \\ (\nu x)(P \mid Q) & \equiv P \mid (\nu x)Q \quad \text{if } x \not\in \text{fn}(P) \\ (\nu x)(P + Q) & \equiv P + (\nu x)Q \quad \text{if } x \not\in \text{fn}(P) \\ (\nu x)\text{if } u = v \text{ then } P & \equiv \text{if } u = v \text{ then } (\nu x)P \quad \text{if } x \neq u \text{ and } x \neq v \\ (\nu x)\text{if } u \neq v \text{ then } P & \equiv \text{if } u \neq v \text{ then } (\nu x)P \quad \text{if } x \neq u \text{ and } x \neq v \\ (\nu x)(\nu y)P & \equiv (\nu y)(\nu x)P \end{align*} \] Table 2: The definition of structural congruence. The reader will here correctly object that “represent the same thing” and “immediately obvious” are not formally defined concepts, and indeed several different versions of the structural congruence can be found in the literature; there is no canonical definition and each has different merits. In Section 5.1 we will meet some of them and explore their consequences. Until then we adopt a particular structural congruence. The definition is given in Table 2. We briefly comment on the clauses in the definition. 1. Alpha-conversion, i.e., choice of bound names, identifies agents like \( a(x) . \overline{c}x \) and \( a(y) . \overline{c}y \). 2. The Abelian monoid laws mean that Parallel and Sum are unordered. For example, when we think of a composition of three agents \( P,Q,R \) it does not matter if we write it as \( (P|Q)|R \) or \( (R|Q)|P \). The same holds for Sum. The fact that \( 0 \) is a unit means that \( P|0 \equiv P \) and \( P + 0 \equiv P \), something which follows from the intuition that \( 0 \) is empty and therefore contributes nothing to a Parallel composition or Sum. 3. The unfolding just says that an Identifier is the same as its Definition, with the appropriate parameter instantiation. 4. 
The scope extension laws come from our intuition that \( (\nu x)P \) just says that \( x \) is a new unique name in \( P \); it can be thought of as marking the occurrences of \( x \) in \( P \) with a special colour saying that this is a local name. It then does not really matter where the symbols \( "(\nu x)" \) are placed as long as they mark the same occurrences. For example, in \( 0 \) there are no occurrences so the Restriction can be removed at will. In Parallel composition, if all occurrences are in one of the components then it does not matter if the Restriction covers only that component or the whole composition. Note that we do not have that \( (\nu x)(P \mid Q) \equiv (\nu x)P \mid (\nu x)Q \). The same occurrences are restricted in both agents, but in \( (\nu x)(P \mid Q) \) they are restricted by the same binder (or if you will, coloured by the same colour), meaning that \( P \) and \( Q \) can interact using \( x \), in contrast to the situation in \( (\nu x)P \mid (\nu x)Q \). Through a combination of these laws we get that \( (\nu x)P \equiv P \) if \( x \not\in \text{fn}(P) \): \[ P \equiv P \mid 0 \equiv P \mid (\nu x)0 \equiv (\nu x)(P \mid 0) \equiv (\nu x)P \] So as a special case we get \( (\nu x)(\nu x)P \equiv (\nu x)P \) for all \( P \). Another key fact is that all unguarded Restrictions can be pulled out to the top level of an agent: **Proposition 1** Let \( P \) be an agent where \( (\nu x)Q \) is an unguarded subterm. Then \( P \) is structurally congruent to an agent \( (\nu x')P' \) where \( P' \) is obtained from \( P \) by replacing \( (\nu x)Q \) with \( Q\{x'/x\} \), for some name \( x' \) not occurring in \( P \). The proof is by alpha-converting all bound names so that they become syntactically distinct, and then applying scope extension (from right to left) to move the Restriction to the outermost level.
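As a small worked instance of Proposition 1 (the agents here are made up for illustration), consider \( \overline{c}x \mid (\nu x)\overline{x}b \), where the Restriction is unguarded and its bound \( x \) clashes with the free \( x \) of the left component:
\[
\overline{c}x \mid (\nu x)\overline{x}b \;\equiv\; \overline{c}x \mid (\nu x')\overline{x'}b \;\equiv\; (\nu x')(\overline{c}x \mid \overline{x'}b)
\]
The first step is alpha-conversion to a fresh \( x' \), and the second is scope extension read from right to left (legitimate since \( x' \not\in \text{fn}(\overline{c}x) \)). The result has the form \( (\nu x')P' \) demanded by the proposition, with \( (\nu x)\overline{x}b \) replaced by \( (\overline{x}b)\{x'/x\} = \overline{x'}b \).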
This corresponds to the intuition that instead of declaring something as local it can be given a syntactically distinct name: the effect is the same in that nothing else can access the name. Our scope extension laws are in fact chosen precisely such that Proposition 1 holds. For example, we have not given any scope extension law for Prefixes and can therefore only pull out unguarded Restrictions. The reader may have expected a law like \((\nu x)\alpha . P \equiv \alpha . (\nu x)P\) for \( x \not\in \alpha \). Indeed such a law would be sound, in the sense that it conforms to intuition and does not disrupt any of the results in this paper, and it will hold for the behavioural equivalences explored later in sections 6 and 7. But it will not be necessary at this point, in particular it is not necessary to prove Proposition 1. Structural congruence is much stronger, i.e., identifies fewer agents, than any of the behavioural equivalences. The structural congruence is used in the definition of the operational semantics, which in turn is used to define the behavioural equivalences. The main technical reasons for taking this route are that many of the following definitions and explanations become simpler and that we get a uniform treatment for those variants of the calculus that actually require a structural congruence. In Section 5.1 we comment on the possibility to define the calculus without a structural congruence. 2.3 Simple Examples Although we shall not present the operational semantics just yet (a reader who wishes to look at it now will find it in Section 4) it might be illuminating to see some examples of the scope migration mentioned in Section 1, that Restrictions move with their objects. Formally, scope migration is a consequence of three straightforward postulates. The first is the usual law for inferring interactions between parallel components.
This is present in most process algebras and implies that \[ a(x) \cdot \overline{x} \mid \overline{a}b \quad \xrightarrow{\tau} \quad \overline{b} \mid 0 \] or in general \[ a(x) \cdot P \mid \overline{a}b \cdot Q \quad \xrightarrow{\tau} \quad P\{b/x\} \mid Q \] The second postulate is that Restrictions do not affect silent transitions. \( P \xrightarrow{\tau} Q \) represents an interaction between the components of \( P \), and a Restriction \((\nu x)P\) only restricts interactions between \( P \) and its environment. Therefore \( P \xrightarrow{\tau} Q \) implies \((\nu x)P \xrightarrow{\tau} (\nu x)Q\). The third postulate is that structurally congruent agents should never be distinguished and thus any semantics must assign them the same behaviour. Now what are the implications for restricted objects? Suppose that \( b \) is a restricted name, i.e., that we are considering a composition \[ a(x) \cdot \overline{x} \mid (\nu b)\overline{a}b \] Will there be an interaction between the components and if so what should it be? Structural congruence gives the answer, because \( b \) is not free in the left hand component so the agent is by scope extension structurally congruent to \[ (\nu b)(a(x) \cdot \overline{x} \mid \overline{a}b) \] and this agent has a transition between the components: because of \[ a(x) \cdot \overline{x} \mid \overline{a}b \quad \xrightarrow{\tau} \quad \overline{b} \mid 0 \] we get that \[ (\nu b)(a(x) \cdot \overline{x} \mid \overline{a}b) \quad \xrightarrow{\tau} \quad (\nu b)(\overline{b} \mid 0) \] and the rightmost \( 0 \) can be omitted by the monoid laws.
So by identifying structurally congruent agents we obtain that \[ a(x) \cdot \overline{x} \mid (\nu b)\overline{a}b \quad \xrightarrow{\tau} \quad (\nu b)\overline{b} \] or in general that, provided \( b \not\in \text{fn}(P) \), \[ a(x) \cdot P \mid (\nu b)\overline{a}b \cdot Q \quad \xrightarrow{\tau} \quad (\nu b)(P\{b/x\} \mid Q) \] In other words, the scope of \((\nu b)\) “moves” with \( b \) from the right hand component to the left. This phenomenon is sometimes called scope extrusion. If \( b \in \text{fn}(P) \) a similar interaction is possible by first alpha-converting the bound $b$ to some name $b' \not\in \text{fn}(P)$, and we would get \[a(x) \cdot P \mid (\nu b)\overline{a}b \cdot Q \overset{\tau}{\longrightarrow} (\nu b')(P\{b' / x\} \mid Q\{b' / b\})\] So $P\{b' / x\}$ still contains $b$ free and it is not the same as the received restricted name $b'$. For another example consider: \[((\nu b)a(x) \cdot P) \mid \overline{a}b \cdot Q\] Here the right hand component has a free $b$ which should not be the same as the bound $b$ to the left. Is there an interaction between the components? We cannot immediately extend the scope to the right hand component since it has $b$ free. But we can first alpha-convert the bound $b$ to some new name $b'$ and then extend the scope to obtain \[(\nu b')(a(x) \cdot P\{b' / b\} \mid \overline{a}b \cdot Q)\] and it is clear that we have a transition \[(\nu b')(a(x) \cdot P\{b' / b\} \mid \overline{a}b \cdot Q) \overset{\tau}{\longrightarrow} ((\nu b')P\{b' / b\}\{b / x\}) \mid Q\] So the restricted name, now $b'$, will still be local to the left hand component; the attempt to intrude the scope is thwarted by an alpha-conversion. In summary, through alpha-conversion and scope extension we can send restricted names as objects, and Restrictions will always move with the objects and never include free occurrences of that name.
This ability to send scopes along with restricted names is what makes the calculus convenient for modelling exchange of private resources. For example, suppose we have an agent \( R \) representing a resource, say a printer, and that it is controlled by a server \( S \) which distributes access rights to \( R \). In the simplest case the access right is just to execute \( R \). This can be modelled by introducing a new name \( e \) as a trigger, and guarding \( R \) by that name, as in \[ (\nu e)(S \mid e.R) \] Here \( R \) cannot execute until it receives a signal on \( e \). The server can invoke it by performing an action \( \overline{e} \), but moreover, the server can send \( e \) to a client wishing to use \( R \). For example, suppose that a client \( Q \) needs the printer. It asks \( S \) along some predetermined channel \( c \) for the access key, here \( e \), to \( R \), and only upon receipt of this key can \( R \) be executed. We have \[ c(x).\overline{x}.Q \mid (\nu e)(\overline{c}e.S \mid e.R) \quad \xrightarrow{\tau} \quad (\nu e)(\overline{e}.Q \mid S \mid e.R) \quad \xrightarrow{\tau} \quad (\nu e)(Q \mid S \mid R) \] The first transition means that \( Q \) receives an access to \( R \) and the second that this access is used. We can informally think of this as if the agent \( R \) is transmitted (represented by its key \( e \)) from \( S \) to \( Q \), so in a sense this gives us the power of a higher-order communication where the objects are agents and not only names. But our calculus is more general since a server can send \( e \) to many clients, meaning that these will share \( R \) (rather than receiving separate copies of \( R \)). And \( R \) can have several keys that make it do different things, for example \( R \) can be \( e_1.R_1 \mid e_2.R_2 \mid \cdots \), and the server can send only some of the keys to clients and retain some for itself, or send different keys to different clients representing different access privileges.
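The trigger idiom can be sketched in Python, modelling the private name \( e \) as a locally created event whose scope is extruded to the client over the public channel \( c \); the threading encoding and the helper names are our own, not part of any π-calculus tool:

```python
import queue
import threading

c = queue.Queue()   # the predetermined public channel c
log = []

def guarded_resource(e):   # e.R : R cannot run until e is signalled
    e.wait()
    log.append("R ran")

def server(e):             # c<e>.S : hand the access key to a client
    c.put(e)

def client():              # c(x).x.Q : ask for the key, then use it
    e = c.get()
    e.set()

e = threading.Event()      # (nu e): the private trigger
ts = [threading.Thread(target=guarded_resource, args=(e,)),
      threading.Thread(target=server, args=(e,)),
      threading.Thread(target=client)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(log)  # ['R ran']
```

Handing the same event to several clients would model shared access to the one resource, mirroring the remark that clients share \( R \) rather than receiving copies of it.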
A related matter is if \( S \) wishes to send two names \( d \) and \( e \) to a client, and ensure that the same client receives both names. If there are several clients then the simple solution of transmitting \( d \) and \( e \) along predetermined channels may mean that one client receives \( d \) and another \( e \). A better solution is to first establish a private channel with a client and then send \( d \) and \( e \) along that channel. The private channel is simply a restricted name: \[ C = (\nu p)\overline{c}p.\overline{p}d.\overline{p}e.S \] A client interacting with \( C \) must be prepared to receive a name, and then along that name receive \( d \) and \( e \): \[ c(p).p(x).p(y).Q \] Now, even if we have a composition with several clients and a server, the only possibility is that \( d \) and \( e \) end up with the same client. This feature is so common that we introduce an abbreviation for it: \[ \overline{c}\,e_1 \cdots e_n.P \quad \text{means} \quad (\nu p)\overline{c}p.\overline{p}e_1.\cdots.\overline{p}e_n.P \] \[ c(x_1 \cdots x_n).Q \quad \text{means} \quad c(p).p(x_1).\cdots.p(x_n).Q \] where we choose \( p \not\in \text{fn}(P, Q) \) and all \( x_i \) are pairwise distinct. We will then have \[ \overline{c}\,e_1 \cdots e_n.P \mid c(x_1 \cdots x_n).Q \quad \xrightarrow{\tau} \cdots \xrightarrow{\tau} \quad P \mid Q\{e_1 \cdots e_n / x_1 \cdots x_n\} \] The idea to establish private links in this way has many other uses. Suppose for example that \( Q \) wishes to execute \( P \) by transmitting on its trigger \( e \), and then also wait until \( P \) has completed execution. One way to represent this is to send to \( P \) a private name for signalling completion, as in \[ (\nu r)(\overline{e}r.r.Q \mid e(x).P) \quad \xrightarrow{\tau} \quad (\nu r)(r.Q \mid P\{r/x\}) \] Here \( Q \) must wait until someone signals on \( r \) before continuing. This someone can only be \( P \) since no other agent is in the scope of \( r \).
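The private-link idiom can likewise be sketched with thread-safe queues; again this is an analogy with invented names, not a formal translation:

```python
import queue
import threading

c = queue.Queue()   # the public channel c
got = []

def server():
    p = queue.Queue()   # (nu p): a fresh private link
    c.put(p)            # c<p>: whichever client answers gets p
    p.put("d")          # p<d>
    p.put("e")          # p<e>: both names travel on the same link

def client(name):
    p = c.get()                          # c(p)
    got.append((name, p.get(), p.get())) # p(x).p(y)

ts = [threading.Thread(target=server),
      threading.Thread(target=client, args=("q1",))]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(got)  # [('q1', 'd', 'e')]
```

With several competing clients, whichever one receives `p` necessarily receives both `"d"` and `"e"`; the others keep waiting on `c`, just as in the polyadic abbreviation above.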
This scheme is quite general; for example, \( P \) can delegate to another agent the task of restarting \( Q \), by sending \( r \) to it as an object in an interaction. The \(\pi\)-calculus has been used to succinctly describe many aspects of concurrent and functional programming, and also of high-level system description where mobility plays an important role. We shall not attempt an overview of all applications here. In the rest of this paper we concentrate on some central aspects of the theory of the calculus.
Abstract—In this paper we present MAYHEM, a new system for automatically finding exploitable bugs in binary (i.e., executable) programs. Every bug reported by MAYHEM is accompanied by a working shell-spawning exploit. The working exploits ensure soundness and that each bug report is security-critical and actionable. MAYHEM works on raw binary code without debugging information. To make exploit generation possible at the binary-level, MAYHEM addresses two major technical challenges: actively managing execution paths without exhausting memory, and reasoning about symbolic memory indices, where a load or a store address depends on user input. To this end, we propose two novel techniques: 1) hybrid symbolic execution for combining online and offline (concolic) execution to maximize the benefits of both techniques, and 2) index-based memory modeling, a technique that allows MAYHEM to efficiently reason about symbolic memory at the binary level. We used MAYHEM to find and demonstrate 29 exploitable vulnerabilities in both Linux and Windows programs, 2 of which were previously undocumented. Keywords—hybrid execution, symbolic memory, index-based memory modeling, exploit generation I. INTRODUCTION Bugs are plentiful. For example, the Ubuntu Linux bug management database currently lists over 90,000 open bugs [18]. However, bugs that can be exploited by attackers are typically the most serious, and should be patched first. Thus, a central question is not whether a program has bugs, but which bugs are exploitable. In this paper we present MAYHEM, a sound system for automatically finding exploitable bugs in binary (i.e., executable) programs. MAYHEM produces a working control-hijack exploit for each bug it reports, thus guaranteeing each bug report is actionable and security-critical. MAYHEM works at the binary level, directly on raw binary code. MAYHEM finds exploitable paths by augmenting symbolic execution [17] with additional constraints at potentially vulnerable program points.
The constraints include details such as whether an instruction pointer can be redirected, whether we can position attack code in memory, and ultimately, whether we can execute the attacker’s code. If the resulting formula is satisfiable, then an exploit is possible. A main challenge in exploit generation is exploring enough of the state space of an application to find exploitable paths. In order to tackle this problem, MAYHEM’s design is based on four main principles: 1) the system should be able to make forward progress for arbitrarily long times—ideally run “forever”—without exceeding the given resources (especially memory), 2) in order to maximize performance, the system should not repeat work, 3) the system should not throw away any work—previous analysis results of the system should be reusable on subsequent runs, and 4) the system should be able to reason about symbolic memory, where a load or store address depends on user input. Handling memory addresses is essential to exploit real-world bugs. Principle #1 is necessary for running complex applications, since most non-trivial programs will contain a potentially infinite number of paths to explore. Current approaches to symbolic execution, e.g., CUTE [27], BitBlaze [6], KLEE [10], SAGE [14], McVeto [28], AEG [3], S2E [29], and others [4], [22], do not satisfy all the above design points. Conceptually, current executors can be divided into two main categories: offline executors — which concretely run a single execution path and then symbolically execute it (also known as trace-based or concolic executors, e.g., SAGE), and online executors — which try to execute all possible paths in a single run of the system (e.g., S2E). Neither online nor offline executors satisfy principles #1-#3. In addition, most symbolic execution engines do not reason about symbolic memory, and thus do not meet principle #4. Offline symbolic executors [6], [14] reason about a single execution path at a time.
Principle #1 is satisfied by iteratively picking new paths to explore. Further, every run of the system is independent from the others and thus results of previous runs can be immediately reused, satisfying principle #3. However, offline execution does not satisfy principle #2. Every run of the system needs to restart execution of the program from the very beginning. Conceptually, the same instructions need to be executed repeatedly for every execution trace. Our experimental results show that this re-execution can be very expensive (see §VIII). Online symbolic execution [10], [29] forks at each branch point. Previous instructions are never re-executed, but the continued forking puts a strain on memory, slowing down the execution engine as the number of branches increases. The result is no forward progress and thus principles #1 and #3 are not met. Some online executors such as KLEE stop forking to avoid being slowed down by their memory use. Such executors satisfy principle #1 but not principle #3 (interesting paths are potentially eliminated). MAYHEM combines the best of both worlds by introducing hybrid symbolic execution, where execution alternates between online and offline symbolic execution runs. Hybrid execution acts like a memory manager in an OS, except that it is designed to efficiently swap out symbolic execution engines. When memory is under pressure, the hybrid engine picks a running executor and saves its current execution state and path formula. The thread is restored by restoring the formula, concretely running the program up to the previous execution state, and then continuing. Caching the path formulas prevents the symbolic re-execution of instructions, which is the bottleneck in offline execution, while managing memory more efficiently than online execution. MAYHEM also proposes techniques for efficiently reasoning about symbolic memory. A symbolic memory access occurs when a load or store address depends on input.
Symbolic pointers are very common at the binary level, and being able to reason about them is necessary to generate control-hijack exploits. In fact, our experiments show that 40% of the generated exploits would have been impossible due to concretization constraints (§VIII). To overcome this problem, MAYHEM employs an index-based memory model (§V) to avoid constraining the index whenever possible. Results are encouraging. While there is ample room for new research, MAYHEM currently generates exploits for several security vulnerabilities: buffer overflows, function pointer overwrites, and format string vulnerabilities for 29 different programs. MAYHEM also demonstrates 2-10× speedup over offline symbolic execution without having the memory constraints of online symbolic execution. Overall, MAYHEM makes the following contributions: 1) Hybrid execution. We introduce a new scheme for symbolic execution—which we call hybrid symbolic execution—that allows us to find a better balance between speed and memory requirements. Hybrid execution enables MAYHEM to explore multiple paths faster than existing approaches (see §IV). 2) Index-based memory modeling. We propose the index-based memory model as a practical approach to dealing with symbolic indices at the binary level (see §V). 3) Binary-only exploit generation. We present the first end-to-end binary-only exploitable bug finding system that demonstrates exploitability by outputting working control hijack exploits. II. OVERVIEW OF MAYHEM In this section we describe the overall architecture, usage scenario, and challenges for finding exploitable bugs. We use an HTTP server, orzHttpd [1]—shown in Figure 1a—as an example to highlight the main challenges and present how MAYHEM works. Note that we show source for clarity and simplicity: MAYHEM runs on binary code.

 1  #define BUFSIZE 4096
 2  typedef struct {
 3      char buf[BUFSIZE];
 4      int used;
 5  } STATIC_BUFFER_t;
 6
 7  typedef struct conn {
 8      STATIC_BUFFER_t read_buf;
 9      ... // omitted
10  } CONN_t;
11
12  static void serverlog(LOG_TYPE_t type,
13                        const char *format, ...)
14  {
15      ... // omitted
16      if (format != NULL) {
17          va_start(ap, format);
18          vsprintf(buf, format, ap);
19          va_end(ap);
20      }
21      fprintf(log, buf); // vulnerable point
22      fflush(log);
23  }
24
25
26  HTTP_STATE_t http_read_request(CONN_t *conn)
27  {
28      ... // omitted
29      while (conn->read_buf.used < BUFSIZE) {
30          sz = static_buffer_read(conn, &conn->read_buf);
31          if (sz < 0)
32              break;
33          conn->read_buf.used += sz;
34          if (memcmp(&conn->read_buf.buf[conn->read_buf.used] - 4, "\r\n\r\n", 4) == 0)
35              break;
36      }
37      if (conn->read_buf.used >= BUFSIZE) {
38          ... // omitted
39          // no end-of-line found within BUFSIZE bytes:
40          // the server answers with a 400 error status message
41          return HTTP_STATE_ERROR;
42      }
43      serverlog(ERROR_LOG, "%s\n",
44                conn->read_buf.buf);
45      ...
46  }

Figure 1: orzHttpd vulnerability. (a) Code snippet. (b) Stack diagram of the vulnerable program.

In \texttt{orzHttpd}, each HTTP connection is passed to \texttt{http_read_request}. This routine in turn calls \texttt{static_buffer_read} as part of the loop on line 29 to get the user request string. The user input is placed into the 4096-byte buffer \texttt{conn->read_buf.buf} on line 30. Each read increments the variable \texttt{conn->read_buf.used} by the number of bytes read so far in order to prevent a buffer overflow. The read loop continues until \r\n\r\n is found, checked on line 34. If the user passes more than 4096 bytes without an HTTP end-of-line character, the read loop aborts and the server returns a 400 error status message on line 41. Each non-error request gets logged via the \texttt{serverlog} function. The vulnerability itself is in \texttt{serverlog}, which calls \texttt{fprintf} with a user-specified format string (an HTTP request). Variadic functions such as \texttt{fprintf} use a format string specifier to determine how to walk the stack for arguments. An exploit for this vulnerability works by supplying format strings that cause \texttt{fprintf} to walk the stack to user-controlled data.
The exploit then uses additional format specifiers to write to the desired location [23]. Figure 1b shows the stack layout of \texttt{orzHttpd} when the format string vulnerability is detected. The formatting argument in the \texttt{fprintf} call is a string of user-controlled bytes, thus leading to a real exploitable vulnerability [2]. We highlight several key points for finding exploitable bugs: - **Low-level details matter**: Determining exploitability requires that we reason about low-level details like return addresses and stack pointers. This is our motivation for focusing on binary-level techniques. - **There are an enormous number of paths**: In the example, there is a new path on every encounter of an if statement, which can lead to an exponential path explosion. Additionally, the number of paths in many portions of the code is related to the size of the input. For example, \texttt{memcmp} unfolds a loop, creating a new path for symbolic execution on each iteration. Longer inputs mean more conditions, more forks, and harder scalability challenges. Unfortunately most exploits are not short strings, e.g., in a buffer overflow typical exploits are hundreds or thousands of bytes long. - **The more checked paths, the better**: To reach the exploitable \texttt{fprintf} bug in the example, MAYHEM needs to reason through the loop, read input, fork a new interpreter for every possible path and check for errors. Without careful resource management, an engine can get bogged down with too many symbolic execution threads because of the huge number of possible execution paths. - **Execute as much natively as possible**: Symbolic execution is slow compared to concrete execution since the semantics of an instruction are simulated in software. In \texttt{orzHttpd}, millions of instructions set up the basic server before an attacker can even connect to a socket. We want to execute these instructions concretely and then switch to symbolic execution. 
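The resource-management point above can be illustrated with a deliberately tiny simulation of hybrid exploration; this is our own simplification, not MAYHEM's actual scheduler, and it models an execution state as nothing more than the list of branch decisions taken so far:

```python
from collections import deque

MEMORY_CAP = 3   # max "online" executors resident at once (toy value)

class Executor:
    def __init__(self, path):
        self.path = path   # branch decisions taken so far

def explore(depth):
    """Explore every path of the given depth without ever holding more
    than MEMORY_CAP forked executors in memory at a time."""
    live = deque([Executor([])])   # online executors
    checkpoints = []               # swapped-out path prefixes (cheap)
    finished = []
    while live or checkpoints:
        if not live:               # restore a checkpoint (in the real
            live.append(Executor(checkpoints.pop()))  # system: re-execute)
        ex = live.popleft()
        if len(ex.path) == depth:  # a complete path: emit a test case
            finished.append(tuple(ex.path))
            continue
        for branch in (0, 1):      # symbolic branch: fork both sides
            child = ex.path + [branch]
            if len(live) < MEMORY_CAP:
                live.append(Executor(child))
            else:                  # memory pressure: checkpoint instead
                checkpoints.append(child)
    return finished

paths = explore(3)
print(len(paths))  # 8: all 2^3 paths explored despite the cap
```

The point of the sketch is that no path is ever discarded: forks that would exceed the memory cap are demoted to checkpoints and revisited later, which is the behaviour the hybrid scheme buys over a purely online executor that must stop forking.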
The MAYHEM architecture for finding exploitable bugs is shown in Figure 2. The user starts MAYHEM by running: ``` mayhem --sym-net 80 400 ./orzhttpd ``` The command-line tells MAYHEM to symbolically execute \texttt{orzHttpd}, and open sockets on port 80 to receive symbolic 400-byte long packets. All remaining steps to create an exploit are performed automatically. MAYHEM consists of two concurrently running processes: a Concrete Executor Client (CEC), which executes code natively on a CPU, and a Symbolic Executor Server (SES). Both are shown in Figure 2. At a high level, the CEC runs on a target system, and the SES runs on any platform, waiting for connections from the CEC. The CEC takes in a binary program along with the potential symbolic sources (input specification) as an input, and begins communication with the SES. The SES then symbolically executes blocks that the CEC sends, and outputs several types of test cases including normal test cases, crashes, and exploits. The steps followed by MAYHEM to find the vulnerable code and generate an exploit are: 1) The \texttt{--sym-net 80 400} argument tells MAYHEM to perform symbolic execution on data read in from a socket on port 80. Effectively this is specifying which input sources are potentially under attacker control. MAYHEM can handle attacker input from environment variables, files, and the network. 2) The CEC loads the vulnerable program and connects to the SES to initialize all symbolic input sources. After the initialization, MAYHEM executes the binary concretely on the CPU in the CEC. During execution, the CEC instruments the code and performs dynamic taint analysis [24]. Our taint tracking engine checks if a block contains tainted instructions, where a block is a sequence of instructions that ends with a conditional jump or a call instruction. 3) When the CEC encounters a tainted branch condition or jump target, it suspends concrete execution. 
A tainted jump means that the target may be dependent on attacker-controlled input. The CEC sends the instructions to the SES and the SES determines which branches are feasible. The CEC will later receive the next branch target to explore from the SES. 4) The SES, running in parallel with the CEC, receives a stream of tainted instructions from the CEC. The SES jits the instructions to an intermediate language (§III), and symbolically executes the corresponding IL. The CEC provides any concrete values whenever needed, e.g., when an instruction operates on a symbolic operand and a concrete operand. The SES maintains two types of formulas: *Path Formula:* The path formula reflects the constraints to reach a particular line of code. Each conditional jump adds a new constraint on the input. For example, lines 32-33 create two new paths: one which is constrained so that the read input ends in a newline (\texttt{\textbackslash n}) and line 35 is executed, and one where the input does not end in a newline and line 28 will be executed. *Exploitability Formula:* The exploitability formula determines whether i) the attacker can gain control of the instruction pointer, and ii) execute a payload. 5) When MAYHEM hits a tainted branch point, the SES decides whether we need to fork execution by querying the SMT solver. If we need to fork execution, all the new forks are sent to the path selector to be prioritized. Upon picking a path, the SES notifies the CEC about the change and the corresponding execution state is restored. If the system resource cap is reached, then the checkpoint manager starts generating checkpoints instead of forking off new executors (§IV). At the end of the process, test cases are generated for the terminated executors and the SES informs the CEC about which checkpoint should continue execution next. 6) During the execution, the SES switches context between executors and the CEC checkpoints/restores the provided execution state and continues execution.
To do so, the CEC maintains a virtualization layer to handle the program interaction with the underlying system and checkpoint/restore between multiple program execution states (§IV-C). 7) When MAYHEM detects a tainted jump instruction, it builds an exploitability formula, and queries an SMT solver to see if it is satisfiable. A satisfying input will be, by construction, an exploit. If no exploit is found on the tainted branch instruction, the SES keeps exploring execution paths. 8) The above steps are performed at each branch until an exploitable bug is found, MAYHEM hits a user-specified maximum runtime, or all paths are exhausted. III. BACKGROUND **Binary Representation in our language.** Basic symbolic execution is performed on assembly instructions as they execute. In the overall system the stream comes from the CEC as explained earlier; here we assume they are simply given to us. We leverage BAP [16], an open-source binary analysis framework, to convert x86 assembly to an intermediate language suitable for symbolic execution. For each instruction executed, the symbolic executor jits the instruction to the BAP IL. The SES performs symbolic execution directly on the IL, introduces additional constraints related to specific attack payloads, and sends the formula to an SMT solver to check satisfiability. For example, the IL for a `ret` instruction consists of two statements: one that loads an address from memory, and one that jumps to that address. **Symbolic Execution on the IL.** In concrete execution, the program is given a concrete value as input, it executes statements to produce new values, and terminates with final values. In symbolic execution we do not restrict execution to a single value, but instead provide a symbolic input variable that represents the set of all possible input values. The symbolic execution engine evaluates expressions for each statement in terms of the original symbolic inputs.
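As a toy illustration of evaluating statements in terms of symbolic inputs, consider the following sketch (a hypothetical mini-IL of our own, not the BAP IL; the names `Sym`, `BinOp`, and `eval_expr` are ours): values are expression trees over a symbolic input, concrete subexpressions fold away, and every computed register stays expressed in terms of the original input.

```python
# Minimal symbolic-evaluation sketch (illustrative only; MAYHEM operates on
# the BAP IL, not this toy representation).

from dataclasses import dataclass

@dataclass(frozen=True)
class Sym:                # a named symbolic input variable
    name: str
    def __repr__(self):
        return self.name

@dataclass(frozen=True)
class BinOp:              # an operation whose operands are not all concrete
    op: str
    lhs: object
    rhs: object
    def __repr__(self):
        return f"({self.lhs} {self.op} {self.rhs})"

def eval_expr(e, store):
    """Evaluate an expression: ints stay concrete, names are looked up in
    the store, and mixed operands build a symbolic expression tree."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return store[e]
    op, l, r = e
    lv, rv = eval_expr(l, store), eval_expr(r, store)
    if isinstance(lv, int) and isinstance(rv, int):
        return {"+": lv + rv, "*": lv * rv}[op]   # constant folding
    return BinOp(op, lv, rv)

# Straight-line "IL": eax := input + 1; ebx := eax * 2
store = {"input": Sym("input")}
store["eax"] = eval_expr(("+", "input", 1), store)
store["ebx"] = eval_expr(("*", "eax", 2), store)
print(store["ebx"])   # ((input + 1) * 2)
```

The final value of `ebx` is not a number but a formula over `input`, which is exactly what lets a solver later search the input space.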
When symbolic execution hits a branch, it considers two possible worlds: one where the true branch target is followed and one where the false branch target is followed. It does so by forking off an interpreter for each branch and asserting in the generated formula that the branch guard must be satisfied. The final formula encapsulates all branch conditions that must be met to execute the given path, and is thus called the path formula or path predicate. In MAYHEM, each IL statement type has a corresponding symbolic execution rule. Assertions in the IL are immediately appended to the formula. Conditional jump statements create two formulas: one where the branch guard is asserted true, and one which asserts the negation of the guard and the false branch is followed. For example, if we already have formula \( f \) and execute `cjmp e_1, e_2, e_3` where \( e_1 \) is the branch guard and \( e_2 \) and \( e_3 \) are jump targets, then we create the two formulas: \[ \begin{align*} f \land e_1 \land \text{FSE}(\text{path}_{e_2}) \\ f \land \neg e_1 \land \text{FSE}(\text{path}_{e_3}) \end{align*} \] where `FSE` stands for forward symbolic execution of the jump target. Due to space, we give the exact semantics in a companion paper [16], [25]. IV. HYBRID SYMBOLIC EXECUTION MAYHEM is a hybrid symbolic execution system. Instead of running in pure online or offline execution mode, MAYHEM can alternate between modes. In this section we present the motivation and mechanics of hybrid execution. A. Previous Symbolic Execution Systems Offline symbolic execution—as found in systems such as SAGE [14]—requires two inputs: the target program and an initial seed input. In the first step, offline systems concretely execute the program on the seed input and record a trace. In the second step, they symbolically execute the instructions in the recorded trace. The drawback is that to explore a different path, we need to first re-execute a (potentially) very large number of instructions. Online symbolic execution avoids this re-execution cost by forking an interpreter at each branch point and keeping all execution states simultaneously; to curb the resulting memory pressure, S2E shares common state between snapshots of physical memory and disks.
Thus, to explore a different path, online execution simply needs to perform a context switch to the execution state of a suspended interpreter. A checkpoint contains the symbolic execution state of the suspended executor (path predicate, statistics, etc.) and replay information. The concrete execution state is discarded. When the online execution eventually finishes all active execution paths, MAYHEM moves to the next phase. 4. Checkpoint Restoration: The checkpoint manager selects a checkpoint based on a ranking heuristic (§IV-D) and restores it in memory. Since the symbolic execution state was saved in the checkpoint, MAYHEM only needs to re-construct the concrete execution state. To do so, MAYHEM concretely executes the program using one satisfiable assignment of the path predicate as input, until the program reaches the instruction when the execution state was suspended. At that point, the concrete state is restored and the online exploration (phase 2) restarts. Note that phase 4 avoids symbolically re-executing instructions during the checkpoint restoration phase (unlike standard concolic execution); the re-execution happens concretely. Also note that the term “checkpoint” differs from an offline execution “seed”, which is just a concrete input. Figure 3 shows the intuition behind hybrid execution. We provide a detailed comparison between online, offline, and hybrid execution in §VIII-C. C. Design and Implementation of the CEC The CEC takes in the binary program, a list of input sources to be considered symbolic, and an optional checkpoint input that contains execution state information from a previous run. The CEC concretely executes the program, hooks input sources and performs taint analysis on input variables. Every basic block that contains tainted instructions is sent to the SES for symbolic execution. As a response, the CEC receives the address of the next basic block to be executed and whether to save the current state as a restoration point.
Whenever an execution path is complete, the CEC context-switches to an unexplored path selected by the SES and continues execution. The CEC terminates only if all possible execution paths have been explored or a threshold is reached. If we provide a checkpoint, the CEC first executes the program concretely until the checkpoint and then continues execution as before. Virtualization Layer. During an online execution run, the CEC handles multiple concrete execution states of the analyzed program simultaneously. Each concrete execution state includes the current register context, memory and OS state (the OS state contains a snapshot of the virtual filesystem, network and kernel state). Under the guidance of the SES and the path selector, the CEC context-switches between different concrete execution states depending on the symbolic executor that is currently active. The virtualization layer mediates all system calls to the host OS and emulates them. Keeping separate copies of the OS state ensures there are no side-effects across different executions. For instance, if one executor writes a value to a file, this modification will only be visible to the current execution state—other executors will have a separate instance of the same file. Efficient State Snapshot. Taking a full snapshot of the concrete execution state at every fork is very expensive. To mitigate the problem, CEC shares state across execution states—similar to other systems [10], [29]. Whenever execution forks, the new execution state reuses the state of the parent execution. Subsequent modifications to the state are recorded in the current execution. D. Design and Implementation of the SES The SES manages the symbolic execution environment and decides which paths are executed by the CEC. The environment consists of a symbolic executor for each path, a path selector which determines which feasible path to run next, and a checkpoint manager. The SES caps the number of symbolic executors to keep in memory. 
When the cap is reached, MAYHEM stops generating new interpreters and produces checkpoints: execution states that will explore program paths that MAYHEM was unable to explore in the first run due to the memory cap. Each checkpoint is prioritized and used by MAYHEM to continue exploration of these paths in a subsequent run. Thus, when all pending execution paths terminate, MAYHEM selects a new checkpoint and continues execution—until all checkpoints are consumed and MAYHEM exits. Each symbolic executor maintains two contexts (as state): a variable context containing all symbolic register values and temporaries, and a memory context keeping track of all symbolic data in memory. Whenever execution forks, the SES clones the current symbolic state (to keep memory low, we keep the execution state immutable to take advantage of copy-on-write optimizations—similar to previous work [10], [29]) and adds a new symbolic executor to a priority queue. This priority queue is regularly updated by our path selector to include the latest changes (e.g., which paths were explored, instructions covered, and so on). Preconditioned Symbolic Execution: MAYHEM implements preconditioned symbolic execution as in AEG [3]. In preconditioned symbolic execution, a user can optionally give a partial specification of the input, such as a prefix or length of the input, to reduce the search space. If a user does not provide a precondition, then the SES tries to explore all feasible paths; this corresponds to the user providing the minimum amount of information to the system. Path Selection: MAYHEM applies path prioritization heuristics—as found in systems such as SAGE [14] and KLEE [10]—to decide which path should be explored next.
Currently, MAYHEM uses three heuristic ranking rules: a) executors exploring new code (e.g., instead of executing known code more times) have high priority, b) executors that identify symbolic memory accesses have higher priority, and c) execution paths where symbolic instruction pointers are detected have the highest priority. The heuristics are designed to prioritize paths that are most likely to contain a bug. For instance, the first heuristic relies on the assumption that previously explored code is less likely to contain a bug than new code. E. Performance Tuning MAYHEM employs several optimizations to speed up symbolic execution. We present the three optimizations that were most effective: 1) independent formulas, 2) algebraic simplifications, and 3) taint analysis. Similar to KLEE [10], MAYHEM splits the path predicate into independent formulas to optimize solver queries. A small implementation difference compared to KLEE is that MAYHEM keeps a map from input variables to formulas at all times, rather than constructing it only when querying the solver (this representation allows more optimizations, §V). MAYHEM also applies other standard optimizations as proposed by previous systems, such as the constraint subsumption optimization [14], a counter-example cache [10] and others. MAYHEM also simplifies symbolic expressions and formulas by applying algebraic simplifications, e.g., \(x \oplus 0 = x\), \(x \mathbin{\&} 0 = 0\), and so on. Recall from §IV-C that MAYHEM uses taint analysis [12], [24] to selectively execute instruction blocks that deal with symbolic data. This optimization gives an \(8\times\) speedup on average over executing all instruction blocks (see §VIII-G). V. INDEX-BASED MEMORY MODELING MAYHEM introduces an index-based memory model as a practical approach to handling symbolic memory loads. The index-based model allows MAYHEM to adapt its treatment of symbolic memory based on the value of the index. In this section we present the entire memory model of MAYHEM.
MAYHEM models memory as a map \(\mu : I \rightarrow E\) from 32-bit indices \(i\) to expressions \(e\). In a \texttt{load}\((\mu, i)\) expression, we say that index \(i\) indexes memory \(\mu\), and the loaded value \(e\) represents the contents of the \(i\)th memory cell. A load with a concrete index \(i\) is directly translated by MAYHEM into an appropriate lookup in \(\mu\) (i.e., \(\mu[i]\)). A \texttt{store}\((\mu, i, e)\) instruction results in a new memory \(\mu[i \leftarrow e]\) where \(i\) is mapped to \(e\). A. Previous Work & Symbolic Index Modeling A symbolic index occurs when the index used in a memory lookup is not a number, but an expression—a pattern that appears very frequently in binary code. For example, a C \texttt{switch}(c) statement is compiled down to a jump-table lookup where the input character \(c\) is used as the index. Standard string conversion functions (such as ASCII to Unicode and vice versa, \texttt{to_lower}, \texttt{to_upper}, etc.) are all in this category. Handling arbitrary symbolic indices is notoriously hard, since a symbolic index may (in the worst case) reference any cell in memory. Previous research and state-of-the-art tools indicate that there are two main approaches for handling a symbolic index: a) concretizing the index and b) allowing memory to be fully symbolic. First, concretizing means instead of reasoning about all possible values that could be indexed in memory, we \texttt{concretize} the index to a single specific address. This concretization can reduce the complexity of the produced formulas and improve solving/exploration times. However, constraining the index to a single value may cause us to miss paths—for instance, if they depend on the value of the index. Concretization is the natural choice for offline executors, such as SAGE [14] or BitBlaze [6], since only a single memory address is accessed during concrete execution. 
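The tradeoff of concretization can be illustrated with a toy jump-table lookup (a sketch with made-up values; `table`, `targets_concretized`, and `targets_symbolic` are our own names, not MAYHEM internals): fixing a symbolic index to one satisfying value explores a single table entry, while reasoning over the whole feasible range keeps every reachable target.

```python
# Toy illustration of concretizing a symbolic index (made-up example).

table = [0x10, 0x20, 0x30, 0x40]    # a 4-entry jump table

def targets_concretized(feasible_indices):
    """Concretization: pick one satisfying index and follow only that entry."""
    i = min(feasible_indices)        # any single satisfying assignment
    return {table[i]}

def targets_symbolic(feasible_indices):
    """Symbolic index: every feasible entry remains a possible target."""
    return {table[i] for i in feasible_indices}

feasible = {0, 1, 2, 3}              # index is fully attacker-controlled
assert targets_concretized(feasible) == {0x10}
assert targets_symbolic(feasible) == {0x10, 0x20, 0x30, 0x40}
```

The three paths missed by concretization here correspond to the exploration loss the text describes.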
Reasoning about all possible indices is also possible by treating memory as fully symbolic. For example, tools such as McVeto [28], BAP [16] and BitBlaze [6] offer capabilities to handle symbolic memory. The main tradeoff—when compared with the concretization approach—is performance. Formulas involving symbolic memory are more expressive, thus solving/exploration times are usually higher. B. Memory Modeling in MAYHEM The first implementation of MAYHEM followed the simple concretization approach and concretized all memory indices. This decision proved to be severely limiting in that selecting a single address for the index usually did not allow us to satisfy the exploit payload constraints. Our experiments show that 40% of the examples require us to handle symbolic memory—simple concretization was insufficient (see §VIII). The alternative approach was symbolic memory. To avoid the scalability problems associated with fully symbolic memory, MAYHEM models memory partially, where writes are always concretized, but symbolic reads are allowed to be modeled symbolically. In the rest of this section we describe the index-based memory model of MAYHEM in detail, as well as some of the key optimizations. Memory Objects. To model symbolic reads, MAYHEM introduces memory objects. Similar to the global memory \(\mu\), a memory object \(M\) is also a map from 32-bit indices to expressions. Unlike the global memory however, a memory object is immutable. Whenever a symbolic index is used to read memory, MAYHEM generates a fresh memory object \(M\) that contains all values that could be accessed by the index—\(M\) is a partial snapshot of the global memory. Using the memory object, MAYHEM can reduce the evaluation of a \texttt{load}\((\mu, i)\) expression to \(M[i]\). Note that this is semantically equivalent to returning \(\mu[i]\). The key difference is in the size of the symbolic array we introduce in the formula.
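A minimal sketch of the memory-object idea (illustrative only; it assumes the feasible bounds \([L, U]\) of the index have already been resolved, and the contents of `memory` are made up):

```python
# Toy memory-object snapshot (illustrative sketch, not MAYHEM internals).

memory = {addr: addr % 251 for addr in range(2**16)}   # "global" memory mu

def make_memory_object(mu, lo, hi):
    """Freeze the cells a symbolic index could touch: an immutable partial
    snapshot M with M[i] == mu[i] for every i in [lo, hi]."""
    return {i: mu[i] for i in range(lo, hi + 1)}

# A symbolic load whose index is known to fall in [0x1000, 0x10FF]:
M = make_memory_object(memory, 0x1000, 0x10FF)
assert len(M) == 256                           # |M| << |mu|
assert all(M[i] == memory[i] for i in M)       # agrees with global memory
```

The symbolic array handed to the solver then only has to cover these 256 cells rather than the whole address space.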
In most cases, the memory object \(M\) will be orders of magnitude smaller than the entire memory \(\mu\). Memory Object Bounds Resolution. Instantiating the memory object requires MAYHEM to find all possible values of a symbolic index \(i\). In the worst case, this may require up to \(2^{32}\) queries to the solver (for 32-bit memory addresses). To tackle this problem MAYHEM exchanges some accuracy for scalability by resolving the bounds \([L, U]\) of the memory region—where \(L\) is the lower and \(U\) is the upper bound of the index. The bounds need to be conservative, i.e., all possible values of the index should be within the \([L, U]\) interval. Note that the memory region does not need to be continuous; for example, \(i\) might have only two realizable values (\(L\) and \(U\)). To obtain these bounds MAYHEM uses the solver to perform binary search on the value of the index in the context of the current path predicate. For example, initially the lower bound of a 32-bit \(i\) satisfies \(L \in [0, 2^{32} - 1]\). If \(i < 2^{31}\) is satisfiable then \(L \in [0, 2^{31} - 1]\), while unsatisfiability indicates that \(L \in [2^{31}, 2^{32} - 1]\). We repeat the process until we recover both bounds. Using the bounds we can now instantiate the memory object (using a fresh symbolic array \(M\)) as follows: \(\forall i \in [L, U] : M[i] = \mu[i]\). The bounds resolution algorithm described above is sufficient to generate a conservative representation of memory objects and allow MAYHEM to reason about symbolic memory reads. In the rest of the section we detail the main optimization techniques MAYHEM includes to tackle some of the caveats of the original algorithm: - Querying the solver on every symbolic memory dereference is expensive. Even with binary search, identifying both bounds of a 32-bit index required \( \sim 54 \) queries on average (§VIII) (addressed in §V-B1, §V-B2, §V-B3). - The memory region may not be continuous.
Even though many values between the bounds may be infeasible, they are still included in the memory object, and consequently, in the formula (§V-B2). - The values within the memory object might have structure. By modeling the object as a single byte array we are missing opportunities to optimize our formulas based on the structure (§V-B4, §V-B5). - In the worst case, a symbolic index may access any possible location in memory (§V-C). 1) **Value Set Analysis (VSA):** MAYHEM employs an online version of VSA [5] to reduce the solver load when resolving the bounds of a symbolic index \( i \). VSA returns a strided interval for the given symbolic index. A strided interval represents a set of values in the form \( S[\mathcal{L}, \mathcal{U}] \), where \( S \) is the stride and \( \mathcal{L}, \mathcal{U} \) are the bounds. For example, the interval \( 2[1, 5] \) represents the set \(\{1, 3, 5\}\). The strided interval output by VSA will be an over-approximation of all possible values the index might have. For instance, \( i = (1 + \text{byte}) \ll 1 \)—where \(\text{byte}\) is a symbolic byte with an interval \([0, 255]\)—results in the interval \( VSA(i) = 2[2, 512] \). The strided interval produced by VSA is then refined by the solver (using the same binary-search strategy) to get the tight lower and upper bounds of the memory object. For instance, if the path predicate asserts that \(\text{byte} < 32\), then the interval for the index \( (1 + \text{byte}) \ll 1 \) can be refined to \( 2[2, 64] \). Using VSA as a preprocessing step has a cascading effect on our memory modeling: a) we perform 70% fewer queries to resolve the exact bounds of the memory object (§VIII), b) the strided interval can be used to eliminate impossible values in the \([\mathcal{L}, \mathcal{U}]\) region, thus making formulas simpler, and c) the elimination can trigger other optimizations (see §V-B5). 2) **Refinement Cache:** Every VSA interval is refined using solver queries.
The refinement process can still be expensive (for instance, the over-approximation returned by VSA might be too coarse). To avoid repeating the process for the same intervals, MAYHEM keeps a cache mapping intervals to potential refinements. Whenever we get a cache hit, we query the solver to check whether the cached refinement is accurate for the current symbolic index, before resorting to binary search for refinement. The refinement cache can reduce the number of bounds-resolution queries by 82% (§VIII). 3) **Lemma Cache:** Checking an entry of the refinement cache still requires solver queries. MAYHEM uses another level of caching to avoid repeatedly querying \( \alpha \)-equivalent formulas, i.e., formulas that are structurally equivalent up to variable renaming. To do so, MAYHEM converts queried formulas to a canonical representation (F) and caches the query results (Q) in the form of a lemma: \( F \rightarrow Q \). The answer for any formula mapping to the same canonical representation is retrieved immediately from the cache. The lemma cache can reduce the number of bounds-resolution queries by up to 96% (§VIII). The effectiveness of this cache depends on the independent formulas optimization (§IV-E): the path predicate has to be represented as a set of independent formulas, otherwise any new formula addition to the current path predicate would invalidate all previous entries of the lemma cache. 4) **Index Search Trees (ISTs):** Any value loaded from a memory object \( M \) is symbolic. To resolve constraints involving a loaded value \( (M[i]) \), the solver needs to both find an entry in the object that satisfies the constraints and ensure that the index to the object entry is realizable. To lighten the burden on the solver, MAYHEM replaces memory object lookup expressions with index search trees (ISTs). An IST is a binary search tree where the symbolic index is the key and the leaf nodes contain the entries of the object.
The entire tree is encoded in the formula representation of the load expression. More concretely, given a (sorted by address) list of entries \( E \) within a memory object \( M \), a balanced IST for a symbolic index \( i \) is defined as: \( IST(E) = \text{ite}(i < addr(E_{\text{right}}), IST(E_{\text{left}}), IST(E_{\text{right}})) \), where \(\text{ite}\) represents an if-then-else expression, \( E_{\text{left}} \) (\( E_{\text{right}} \)) represents the left (right) half of the initial entries \( E \), and \( addr(\cdot) \) returns the lowest address of the given entries. For a single entry the IST returns the entry without constructing any \(\text{ite}\) expressions. Note that the above definition constructs a balanced IST. We could instead construct the IST as a linear chain of nested \(\text{ite}\) expressions—making the formula depth \( O(n) \) in the number of object entries instead of \( O(\log n) \). However, our experimental results show that a balanced IST is \( 4 \times \) faster than a nested IST (§VIII). Figure 5 shows how MAYHEM constructs the IST when given the entries of a memory object (the \text{to_lower} conversion table) with a single symbolic character as the index. 5) Bucketization with Linear Functions: The IST generation algorithm creates a leaf node for each entry in the memory object. To reduce the number of entries, MAYHEM performs an extra preprocessing step before passing the object to the IST. The idea is that we can use the memory object structure to combine multiple entries into a single bucket. A bucket is an index-parameterized expression that returns the value of the memory object for every index within a range. MAYHEM uses linear functions to generate buckets. Specifically, MAYHEM sweeps all entries within a memory object and joins consecutive points ((index, value) tuples) into lines, a process we call linearization. Any two points can form a line \( y = \alpha x + \beta \).
Follow-up points \((i_i, v_i)\) will be included in the same line if \( v_i = \alpha i_i + \beta \). At the end of linearization, the memory object is split into a list of buckets, where each bucket is either a line or an isolated point. The list of buckets can now be passed to the IST algorithm. Figure 5 shows the \text{to_lower} IST after applying linearization. Linearization effectively reduces the number of leaf nodes from 256 to 3. The idea of using linear functions to simplify memory lookups comes from a simple observation: linear-like patterns appear frequently for several operations at the binary level. For example, jump tables generated by switch statements, conversion and translation tables (e.g., ASCII to Unicode and vice versa) all contain values that scale linearly with the index. C. Prioritized Concretization. Modeling a symbolic load using a memory object is beneficial when the size of the memory object is significantly smaller than the entire memory \( (|M| \ll |\mu|) \). Thus, the above optimizations are only activated when the size of the memory object, approximated by the range, is below a threshold (\( |M| < 1024 \) in our experiments). Whenever the memory object size exceeds the threshold, MAYHEM will concretize the index used to access it. However, instead of picking a satisfying value at random, MAYHEM attempts to prioritize the possible concretization values. Specifically, for every symbolic pointer, MAYHEM performs three checks: 1) Check if it is possible to redirect the pointer to unmapped memory under the context of the current path predicate. If true, MAYHEM will generate a crash test case for the satisfying value. 2) Check if it is possible to redirect the symbolic pointer to symbolic data. If it is, MAYHEM will redirect (and concretize) the pointer to the least constrained region of the symbolic data.
By redirecting the pointer towards the least constrained region, MAYHEM tries to avoid loading overconstrained values, and thus avoids eliminating potentially interesting paths that depend on these values. To identify the least constrained region, MAYHEM splits memory into symbolic regions, and sorts them based on the complexity of constraints associated with each region. 3) If all of the above checks fail, MAYHEM concretizes the index to a valid memory address and continues execution. The above steps infer whether a symbolic expression is a pointer, and if so, whether it is valid or not (e.g., NULL). For example, Figure 6 contains a buffer overflow at line 9. However, an attacker is not guaranteed to hijack control even if \text{strcpy} overwrites the return address. The program needs to reach the return instruction to actually transfer control. However, at line 10 the program performs two dereferences, both of which need to succeed (i.e., avoid crashing the program) to reach line 11 (note that pointer \text{ptr} is already overwritten with user data). MAYHEM augmented with prioritized concretization will generate 3 distinct test cases: 1) a crash test case for an invalid dereference of pointer \text{ptr}, 2) a crash test case where dereferencing pointer \text{bar} fails after successfully redirecting \text{ptr} to symbolic data, and 3) an exploit test case, where both dereferences succeed and user input hijacks control of the program. Figure 6 shows the memory layout for the third test case. VI. EXPLOIT GENERATION MAYHEM checks for two exploitable properties: a symbolic (tainted) instruction pointer, and a symbolic format string. The properties correspond to a buffer overflow and a format string attack respectively. Whenever either of the two exploitable policies is violated, MAYHEM generates an exploitability formula and tries to find a satisfying answer, i.e., an exploit. MAYHEM can generate both local and remote attacks.
Our generic design allows us to handle both types of attacks similarly. For Windows, MAYHEM detects an overwritten Structured Exception Handler (SEH) on the stack when an exception occurs, and tries to create an SEH-based exploit. Buffer Overflows: MAYHEM generates exploits for any possible instruction-pointer overwrite, commonly triggered by a buffer overflow. When MAYHEM finds a symbolic instruction pointer, it first tries to generate jump-to-register exploits, similar to previous work [15]. For this type of exploit, the instruction pointer should point to a trampoline, e.g., jmp %eax, and the register, e.g., %eax, should point to a place in memory where we can place our shellcode. By encoding those constraints into the formula, MAYHEM is able to query the solver for a satisfying answer. If an answer exists, the bug is proven exploitable. If we cannot generate a jump-to-register exploit, we try to generate a simpler exploit by making the instruction pointer point directly to a place in memory where we can place shellcode. Format String Attacks: To identify and generate format string attacks, MAYHEM checks whether the format argument of format string functions, e.g., printf, contains any symbolic bytes. If any symbolic bytes are detected, it tries to place a format string payload within the argument that will overwrite the return address of the formatting function. VII. IMPLEMENTATION MAYHEM consists of about 27,000 lines of C/C++ and OCaml code. Our binary instrumentation framework was built on Pin [19] and all the hooks for modeled system and API calls were written in C/C++. The symbolic execution engine is written solely in OCaml and consists of about 10,000 lines of code. We rely on BAP [16] to convert assembly instructions to the IL. We use Z3 [13] as our decision procedure, for which we built direct OCaml bindings.
To allow for remote communication between the two components we implemented our own cross-platform, light-weight RPC protocol (both in C++ and OCaml). Additionally, to compare between different symbolic execution modes, we implemented all three: online, offline and hybrid. VIII. EVALUATION A. Experimental Setup We evaluated our system on 2 virtual machines running on a desktop with a 3.40GHz Intel(R) Core i7-2600 CPU and 16GB of RAM. Each VM had 4GB of RAM; the two VMs ran Debian Linux (Squeeze) and Windows XP SP3 respectively. Figure 7: Memory use in online, offline, and hybrid mode. B. Exploitable Bug Detection We downloaded 29 different vulnerable programs to check the effectiveness of MAYHEM. Table I summarizes our results. Experiments were performed on stripped unmodified binaries on both Linux and Windows. One of the Windows applications MAYHEM exploited (Dizzy) was a packed binary. Column 3 shows the type of exploits that MAYHEM detected, as described in §VI. Column 4 shows the symbolic sources that we considered for each program. There are examples from all the symbolic input sources that MAYHEM supports, including command-line arguments (Arg.), environment variables (Env. Vars), network packets (Network) and symbolic files (Files). Column 5 is the size of each symbolic input. Column 6 describes the precondition types that we provided to MAYHEM, for each of the 29 programs. They are split into three categories: length, prefix and crashing input, as described in §IV-D. Column 7 shows the advisory reports for all the demonstrated exploits. In fact, MAYHEM found 2 zero-day exploits for two Linux applications, both of which we reported to the developers. The last column contains the exploit generation time for the programs that MAYHEM analyzed. We measured the exploit generation time as the time taken from the start of analysis until the creation of the first working exploit.
The time required varies greatly with the complexity of the application and the size of the symbolic inputs. The fastest program to exploit was the Linux wireless configuration utility iwconfig, in 1.90 seconds, and the slowest was the Windows program Dizzy, which took about 4 hours.
C. Scalability of Hybrid Symbolic Execution
We measured the effectiveness of hybrid symbolic execution along two scaling dimensions: memory use and speed. **Less Memory-Hungry than Online Execution.** Figure 7 shows the average memory use of MAYHEM over time while analyzing a utility in coreutils (echo) with online, offline and hybrid execution. <table> <thead> <tr> <th>Program</th> <th>Exploit Type</th> <th>Input Source</th> <th>Symbolic Input Size</th> <th>Symb. Mem.</th> <th>Precondition</th> <th>Advisory ID.</th> <th>Exploit Gen. Time (s)</th> </tr> </thead> <tbody> <tr> <td>A2ps</td> <td>Stack Overflow</td> <td>Env. Vars</td> <td>550</td> <td>✓</td> <td>length</td> <td>EDB-ID-816</td> <td>189</td> </tr> <tr> <td>Aeon</td> <td>Stack Overflow</td> <td>Env.
Vars</td> <td>1000</td> <td>✓</td> <td>length</td> <td>CVE-2005-1019</td> <td>10</td> </tr> <tr> <td>Aspell</td> <td>Stack Overflow</td> <td>Stdin</td> <td>750</td> <td>✓</td> <td>crashing</td> <td>CVE-2004-0548</td> <td>82</td> </tr> <tr> <td>Athttpd</td> <td>Stack Overflow</td> <td>Network</td> <td>800</td> <td>✓</td> <td>crashing</td> <td>CVE-2000-1816</td> <td>209</td> </tr> <tr> <td>FreeRadius</td> <td>Stack Overflow</td> <td>Env.</td> <td>9000</td> <td>✓</td> <td>length</td> <td>Zero-Day</td> <td>133</td> </tr> <tr> <td>GhostScript</td> <td>Stack Overflow</td> <td>Arg.</td> <td>2000</td> <td>✓</td> <td>prefix</td> <td>CVE-2010-2055</td> <td>18</td> </tr> <tr> <td>Giftpd</td> <td>Stack Overflow</td> <td>Arg.</td> <td>300</td> <td>✓</td> <td>length</td> <td>OSVDB-ID-16373</td> <td>4</td> </tr> <tr> <td>Gnugol</td> <td>Stack Overflow</td> <td>Env.</td> <td>3200</td> <td>✓</td> <td>length</td> <td>Zero-Day</td> <td>22</td> </tr> <tr> <td>Hitget</td> <td>Stack Overflow</td> <td>Env. vars</td> <td>350</td> <td>✓</td> <td>length</td> <td>N/A</td> <td>7</td> </tr> <tr> <td>Htpasswd</td> <td>Stack Overflow</td> <td>Arg.</td> <td>400</td> <td>✓</td> <td>prefix</td> <td>OSVDB-ID-10068</td> <td>4</td> </tr> <tr> <td>Iwconfig</td> <td>Stack Overflow</td> <td>Arg.</td> <td>400</td> <td>✓</td> <td>length</td> <td>CVE-2003-0947</td> <td>2</td> </tr> <tr> <td>Mbse-bhs</td> <td>Stack Overflow</td> <td>Env. vars</td> <td>4200</td> <td>✓</td> <td>length</td> <td>CVE-2007-0368</td> <td>362</td> </tr> <tr> <td>nCompress</td> <td>Stack Overflow</td> <td>Arg.</td> <td>1400</td> <td>✓</td> <td>length</td> <td>CVE-2001-1413</td> <td>11</td> </tr> <tr> <td>OrzHttpd</td> <td>Format String</td> <td>Network</td> <td>400</td> <td>✓</td> <td>length</td> <td>OSVDB-ID-60944</td> <td>6</td> </tr> <tr> <td>PST-utils</td> <td>Stack Overflow</td> <td>Arg.</td> <td>300</td> <td>✓</td> <td>length</td> <td>EDB-ID-890</td> <td>46</td> </tr> <tr> <td>Rsync</td> <td>Stack Overflow</td> <td>Env. 
Vars</td> <td>100</td> <td>✓</td> <td>length</td> <td>CVE-2004-2093</td> <td>8</td> </tr> <tr> <td>SharUtils</td> <td>Format String</td> <td>Arg.</td> <td>300</td> <td>✓</td> <td>prefix</td> <td>OSVDB-ID-10255</td> <td>17</td> </tr> <tr> <td>Socat</td> <td>Format String</td> <td>Arg.</td> <td>600</td> <td>✓</td> <td>prefix</td> <td>CVE-2004-1484</td> <td>47</td> </tr> <tr> <td>Squirrel Mail</td> <td>Stack Overflow</td> <td>Arg.</td> <td>150</td> <td>✓</td> <td>length</td> <td>CVE-2004-0524</td> <td>2</td> </tr> <tr> <td>Tipx</td> <td>Format String</td> <td>Arg.</td> <td>250</td> <td>✓</td> <td>length</td> <td>OSVDB-ID-12346</td> <td>10</td> </tr> <tr> <td>xGalaga</td> <td>Stack Overflow</td> <td>Env. Vars</td> <td>300</td> <td>✓</td> <td>length</td> <td>CVE-2003-0454</td> <td>3</td> </tr> <tr> <td>Xtokkaetama</td> <td>Stack Overflow</td> <td>Arg.</td> <td>100</td> <td>✓</td> <td>crashing</td> <td>OSVDB-ID-2343</td> <td>10</td> </tr> <tr> <td>Coolplayer</td> <td>Stack Overflow</td> <td>Files</td> <td>210</td> <td>✓</td> <td>crashing</td> <td>CVE-2008-3408</td> <td>164</td> </tr> <tr> <td>Destiny</td> <td>Stack Overflow</td> <td>Files</td> <td>2100</td> <td>✓</td> <td>crashing</td> <td>OSVDB-ID-53249</td> <td>963</td> </tr> <tr> <td>Dizzy</td> <td>Stack Overflow (SEH)</td> <td>Arg.</td> <td>519</td> <td>✓</td> <td>crashing</td> <td>EDB-ID-15566</td> <td>13,260</td> </tr> <tr> <td>GAAlan</td> <td>Stack Overflow</td> <td>Files</td> <td>1500</td> <td>✓</td> <td>prefix</td> <td>OSVDB-ID-60897</td> <td>831</td> </tr> <tr> <td>GSPlayer</td> <td>Stack Overflow</td> <td>Files</td> <td>400</td> <td>✓</td> <td>crashing</td> <td>OSVDB-ID-69006</td> <td>120</td> </tr> <tr> <td>Muse</td> <td>Stack Overflow</td> <td>Files</td> <td>250</td> <td>✓</td> <td>crashing</td> <td>OSVDB-ID-67277</td> <td>481</td> </tr> <tr> <td>Soritong</td> <td>Stack Overflow (SEH)</td> <td>Files</td> <td>1000</td> <td>✓</td> <td>crashing</td> <td>CVE-2009-1643</td> <td>845</td> </tr> </tbody> </table> 
Table I: List of programs that MAYHEM demonstrated as exploitable.
After a few minutes, online execution reaches the maximum number of live interpreters and starts terminating execution paths. At this point, memory use keeps increasing linearly as the paths we explore become deeper. Note that at the beginning, hybrid execution consumes as much memory as online execution without exceeding the memory threshold, and it utilizes memory resources more aggressively than offline execution throughout the run. Offline execution requires much less memory (less than 500KB on average), but at a performance cost, as demonstrated below. **Faster than Offline Execution.** Figure 8 shows the exploration time for `/bin/echo` using different limits on the maximum number of running executors. For this experiment, we used 6 bytes of symbolic arguments to explore the entire input space in a reasonable amount of time. When the maximum number of running executors is 1, MAYHEM produces a disk checkpoint (the average checkpoint size was 30KB) for every symbolic branch and is thus equivalent to offline execution. When the maximum number of running executors was 128 or above, MAYHEM did not have to checkpoint to disk and is thus equivalent to an online executor. As a result, online execution took around 25 seconds to explore the input space, while offline execution needed 1,400 seconds: online was $56 \times$ faster than offline in this experiment. We identified two major reasons for this performance boost. First, the re-execution cost is higher than the cost of context-switching between two execution states (§IV-B): MAYHEM spent more than 25% of the time re-executing previous paths in the offline scheme, whereas in the online case only 2% of the time was spent context-switching. Second, online execution is more cache-efficient than offline execution in our implementation. Specifically, online execution makes more efficient use of the Pin code cache [19] by switching between paths in-memory during a single execution.
As a result, the code cache made online execution $40 \times$ faster than offline execution. Additionally, we ran a Windows GUI program (MiniShare) to compare the throughput of offline and hybrid execution. We chose this program because it does not require user interaction (e.g., mouse clicks) to start symbolic execution. We ran the program for 1 hour in each execution mode. Hybrid execution was $10 \times$ faster than offline execution.
D. Handling Symbolic Memory in Real-World Applications
Recall from §V that index-based memory modeling enables MAYHEM to reason about symbolic indices. Our experiments from Table I show that more than 40% of the programs required symbolic memory modeling (column 6) to exploit. In other words, MAYHEM—after several hours of analysis—was unable to generate exploits for these programs without index-based memory modeling. To understand why, we evaluated our index-based memory modeling optimizations on the atphttpd server. **Bounds Resolution.** Table II shows the time taken by MAYHEM to find a vulnerability in atphttpd using different levels of optimizations for the bounds resolution algorithm. The times include exploit detection but not exploit generation time (since the latter is not affected by the bounds resolution algorithm). Row 3 shows that VSA reduces the average number of queries to the SMT solver from $\sim 54$ to $\sim 14$ queries per symbolic memory access, and reduces the total time by 75%. Row 4 shows the number of queries when the refinement cache (R cache) is enabled on top of VSA. The R cache reduces the number of necessary binary searches from 4,003 to 7, resulting in a 57% speedup. The last row shows the effect of the lemma cache (L cache) on top of the other optimizations. The L cache takes most of the burden off the R cache, thus resulting in an additional 59% speedup. We do not expect the L cache to always be that efficient, since it relies heavily on the independence of formulas in the path predicate.
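The bounds-resolution step that these caches accelerate can be sketched as a binary search over solver queries. The code below is a toy stand-in: the `feasible` set plays the role of the SMT solver (each `any(...)` call models one "is index >= mid satisfiable?" query), only the refinement cache is modeled, and the initial `[lo, hi]` range would come from VSA in the real system; the function and key names are ours.

```python
def resolve_bounds(feasible, lo, hi):
    """Find the maximum feasible value of a symbolic index in [lo, hi]
    by binary search; each `any(...)` call stands in for one SMT query
    of the form 'is index >= mid satisfiable under the path predicate?'."""
    queries = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        queries += 1
        if any(v >= mid for v in feasible):
            lo = mid
        else:
            hi = mid - 1
    return lo, queries

# Refinement-cache analogue: memoize the resolved bound per expression.
_bounds_cache = {}

def resolve_bounds_cached(expr_key, feasible, lo, hi):
    """On a cache hit, the bound is returned without any solver queries."""
    if expr_key in _bounds_cache:
        return _bounds_cache[expr_key], 0
    bound, queries = resolve_bounds(feasible, lo, hi)
    _bounds_cache[expr_key] = bound
    return bound, queries
```

A tighter VSA-provided `[lo, hi]` directly shrinks the number of binary-search iterations, which is why VSA alone already cuts the query count substantially.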
The cumulative speedup was 96%. **Index Search Tree Representation.** Recall from §V-B that MAYHEM models symbolic memory loads as ISTs. To show the effectiveness of this optimization, we ran atphttpd with three different formula representations (shown in Table III). The balanced IST was more than $4 \times$ faster than the unbalanced binary tree representation, and with linearization of the formula we obtained a cumulative $9 \times$ speedup. Note that with symbolic arrays (no ISTs) we were unable to detect an exploit within the time limit.
E. MAYHEM Coverage Comparison
To evaluate MAYHEM’s ability to cover new paths, we downloaded an open-source symbolic executor (KLEE) and compared its performance against MAYHEM. Note that KLEE runs on source code, while MAYHEM runs on binaries. We measured the code coverage of 25 coreutils applications as a function of time. MAYHEM ran for at most one hour on each of those applications. We used the generated test cases to measure the code coverage using the GNU gcov <table> <thead> <tr> <th>L Hits</th> <th>R Hits</th> <th>Misses</th> <th># Queries</th> <th>Time (sec)</th> </tr> </thead> <tbody> <tr> <td>No opt.</td> <td>N/A</td> <td>N/A</td> <td>217,179</td> <td>1,841</td> </tr> <tr> <td>+ VSA</td> <td>N/A</td> <td>N/A</td> <td>49,424</td> <td>437</td> </tr> <tr> <td>+ R cache</td> <td>N/A</td> <td>3906</td> <td>7</td> <td>10,331</td> </tr> <tr> <td>+ L cache</td> <td>3940</td> <td>56</td> <td>7</td> <td>242</td> </tr> </tbody> </table> Table II: Effectiveness of bounds resolution optimizations. The L and R caches are respectively the Lemma and Refinement caches as defined in §V. <table> <thead> <tr> <th>Formula Representation</th> <th>Time (sec.)</th> </tr> </thead> <tbody> <tr> <td>Unbalanced binary tree</td> <td>1,754</td> </tr> <tr> <td>Balanced binary tree</td> <td>425</td> </tr> <tr> <td>Balanced binary tree + Linearization</td> <td>192</td> </tr> </tbody> </table> Table III: Performance comparison for different IST representations.
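The balanced-tree idea behind ISTs can be sketched as follows. This is a structural illustration only: MAYHEM's actual ISTs are if-then-else formula terms handed to the solver, not Python dictionaries, and the 4-entry memory region and its contents below are made up.

```python
def build_ist(entries):
    """Build a balanced index search tree (IST) over sorted
    (index, value) pairs: a symbolic load becomes a tree of
    if-then-else nodes of depth O(log n) instead of a linear chain."""
    if not entries:
        return None
    mid = len(entries) // 2
    idx, val = entries[mid]
    return {"index": idx, "value": val,
            "left": build_ist(entries[:mid]),
            "right": build_ist(entries[mid + 1:])}

def ist_lookup(node, i):
    """Evaluate the IST for a concrete index value (this mirrors what
    a satisfying assignment for the symbolic index would select)."""
    while node is not None:
        if i == node["index"]:
            return node["value"]
        node = node["left"] if i < node["index"] else node["right"]
    return None  # index falls outside the resolved memory region

# A hypothetical 4-entry memory region.
mem = build_ist([(0, 0x41), (1, 0x42), (2, 0x43), (3, 0x44)])
```

The unbalanced variant from Table III corresponds to a degenerate tree (a chain of comparisons), which is why balancing alone already yields a large speedup.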
### Table IV: AEG comparison: binary-only execution requires more instructions.
<table> <thead> <tr> <th>Program</th> <th>AEG Time</th> <th>AEG LLVM Instr.</th> <th>MAYHEM Time</th> <th>MAYHEM ASM Instr.</th> <th>Tainted ASM</th> <th>Tainted IL</th> </tr> </thead> <tbody> <tr> <td>iwconfig</td> <td>0.506s</td> <td>10,876</td> <td>1.90s</td> <td>394,876</td> <td>2,200</td> <td>12,893</td> </tr> <tr> <td>aspell</td> <td>8.698s</td> <td>87,056</td> <td>24.62s</td> <td>696,275</td> <td>26,647</td> <td>133,620</td> </tr> <tr> <td>aeon</td> <td>2.188s</td> <td>18,539</td> <td>9.67s</td> <td>623,684</td> <td>7,087</td> <td>43,804</td> </tr> <tr> <td>lqbd</td> <td>0.864s</td> <td>12,776</td> <td>6.76s</td> <td>576,005</td> <td>2,670</td> <td>16,391</td> </tr> <tr> <td>ipxd</td> <td>2.343s</td> <td>82,030</td> <td>9.91s</td> <td>647,498</td> <td>2,043</td> <td>19,198</td> </tr> <tr> <td>ncompress</td> <td>5.511s</td> <td>60,860</td> <td>11.30s</td> <td>583,330</td> <td>8,778</td> <td>71,195</td> </tr> </tbody> </table>
Figure 10: Exploit generation time versus precondition size.
utility. The results are shown in Figure 9. We used the 21 tools with the smallest code size, and 4 bigger tools that we selected. MAYHEM achieved a 97.56% average coverage per application and reached 100% coverage on 13 tools. For comparison, KLEE achieved 100% coverage on 12 coreutils without simulated system call failures (to match MAYHEM's configuration). Thus, MAYHEM seems to be competitive with KLEE on this data set. Note that MAYHEM is not designed specifically for maximizing code coverage; however, our experiments provide a rough comparison point against other symbolic executors.
### F. Comparison against AEG
We picked 8 different programs from the AEG working examples [3] and ran both tools to compare exploit generation times on each of those programs using the same configuration (Table IV). MAYHEM was on average $3.4 \times$ slower than AEG.
AEG uses source code and thus has the advantage of operating at a higher level of abstraction. At the binary level, there are no types and no high-level structures such as functions, variables, buffers and objects. The number of instructions executed (Table IV) is another factor that highlights the difference between source-level and binary-only analysis. Considering this, we believe this is a competitive result for MAYHEM.
#### Precondition Size
As an additional experiment, we measured how the presence of a precondition affects exploit generation times. Specifically, we picked 6 programs that require a crashing input to find an exploitable bug, iteratively decreased the size of the precondition, and measured the exploit generation times. Figure 10 summarizes our results in terms of normalized precondition sizes; for example, a normalized precondition of 70% for a 100-byte crashing input means that we provide 70 bytes of the crashing input as a precondition to MAYHEM. While the behavior appeared to be program-dependent, in most of the programs we observed a sudden phase transition, where the removal of a single character could cause MAYHEM to no longer detect the exploitable bug within the time limit. We believe this to be an interesting topic for future work in the area.
### G. Performance Tuning
#### Formula Optimizations
Recall from §IV-E that MAYHEM uses various optimization techniques to make solver queries faster. To quantify their effect, we turned off some or all of these optimizations and compared against our optimized version of MAYHEM. We chose 15 Linux programs to evaluate the speedup obtained with different levels of optimizations turned on. Figure 11 shows a head-to-head comparison (in exploit finding and generation times) between 4 different formula optimization options. Algebraic simplifications usually speed up our analysis and offer an average speedup of 10% for the 15 test programs.
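The kind of algebraic simplification measured here can be sketched on a tiny expression AST. The rewrite rules below are illustrative stand-ins chosen by us, not MAYHEM's actual rule set; tuples are `(op, left, right)` nodes, leaves are integer constants or string-named symbolic variables.

```python
def simplify(expr):
    """One bottom-up pass of algebraic simplifications over a tiny
    expression AST, of the kind applied before a formula is sent to
    the solver."""
    if not isinstance(expr, tuple):
        return expr                                      # constant or variable
    op, a, b = expr
    a, b = simplify(a), simplify(b)
    if isinstance(a, int) and isinstance(b, int):        # constant folding
        return {"&": a & b, "|": a | b, "^": a ^ b, "+": a + b}[op]
    if op == "&" and 0 in (a, b):
        return 0                                         # x & 0 -> 0
    if op == "^" and a == b:
        return 0                                         # x ^ x -> 0
    if op == "|" and 0 in (a, b):
        return b if a == 0 else a                        # x | 0 -> x
    return (op, a, b)
```

Even a pass this simple can collapse common compiler idioms such as `xor %ebx, %ebx` before the solver ever sees them.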
Significant speedups occur when the independent formula optimization is turned on along with the simplifications, offering speedups of $10$-$100 \times$. Z3 supports incremental solving, so as an additional experiment we measured the exploit generation time with Z3 in incremental mode. In most cases, solving times for incremental formulas are comparable to the times we obtain with the independent-formulas optimization. In fact, in half of our examples (7 out of 15), incremental formulas outperform independent formulas. In contrast to previous results, this implies that using the solver in incremental mode can alleviate the need for many formula simplifications and optimizations. A downside of using the solver in incremental mode was that it made our symbolic execution state mutable, and thus less memory-efficient during our long-running tests.
#### Tainted Instructions
Only tainted instruction blocks are evaluated symbolically by MAYHEM; all other blocks are executed natively. Figure 12 shows the percentage of tainted instructions for 24 programs (taken from Table I). More than 95% of the instructions were not tainted in our sample programs, and this optimization gave about an $8 \times$ speedup on average.
IX. DISCUSSION
Most of the work presented in this paper focuses on finding exploitable bugs. However, we believe that the main techniques can be adapted to other application domains in the context of symbolic execution. We also believe that our hybrid symbolic execution and index-based memory modeling represent new points in the design space of symbolic execution. We stress that the intention of MAYHEM is to inform a user that an exploitable bug exists. The exploit produced is intended to demonstrate the severity of the problem, and to help debug and address the underlying issue. MAYHEM makes no effort to bypass OS defenses such as ASLR and DEP, which will likely protect systems against the exploits we generate.
However, our previous work on Q [26] shows that a broken exploit (one that no longer works because of ASLR and DEP) can be automatically transformed, with high probability, into an exploit that bypasses both defenses on modern OSes. While we could feed the exploits generated by MAYHEM directly into Q, we do not explore this possibility in this paper. Limitations: MAYHEM does not have models for all system/library calls. The current implementation models about 30 system calls in Linux and 12 library calls in Windows. To analyze larger and more complicated programs, more system calls need to be modeled. This is an artifact of performing per-process symbolic execution. Whole-system symbolic executors such as S2E [29] or BitBlaze [6] can execute both user and kernel code, and thus do not have this limitation. The downside is that whole-system analysis can be much more expensive, because of the higher state restoration cost and the time spent analyzing kernel code. Another limitation is that MAYHEM can currently analyze only a single execution thread on every run; it cannot handle multi-threaded programs in which threads interact with each other (through message passing or shared memory). Last, MAYHEM executes only tainted instructions and is thus subject to all the pitfalls of taint analysis, including undertainting, overtainting and implicit flows [25]. Future Work: Our experiments show that MAYHEM can generate exploits for standard vulnerabilities such as stack-based buffer overflows and format strings. An interesting future direction is to extend MAYHEM to handle more advanced exploitation techniques such as heap-based buffer overflows, use-after-free vulnerabilities, and information disclosure attacks. At a high level, it should be possible to detect such attacks using safety properties similar to the ones MAYHEM currently employs. However, it is still an open question how the same techniques can scale to detect such exploits in bigger programs. X.
RELATED WORK
Brumley et al. [8] introduced the automatic patch-based exploit generation (APEG) challenge. APEG used the patch to point out the location of the bug and then used slicing to construct a formula for code paths from the input source to the vulnerable line. MAYHEM finds vulnerabilities and vulnerable code paths itself. In addition, APEG’s notion of an exploit is more abstract: any input that violates the checks introduced by the patch is considered an exploit. Here we consider specifically control-flow hijack exploits, which were not automatically generated by APEG. Heelan [15] was the first to describe a technique that takes in a crashing input for a program, along with a jump register, and automatically generates an exploit. Our research explores the state space to find such crashing inputs. AEG [3] was the first system to tackle the problem of both identifying exploitable bugs and automatically generating exploits. AEG worked solely on source code and introduced preconditioned symbolic execution as a way to focus symbolic execution on a particular part of the search space. MAYHEM is a logical extension of AEG to binary code. In practice, working on binary code opens up automatic exploit generation to a wider class of programs and scenarios. There are several binary-only symbolic execution frameworks, such as Bouncer [11], BitFuzz [9], BitTurner [7], FuzzBall [21], McVeto [28], SAGE [14], and S2E [29], which have been used in a variety of application domains. The main question we tackle in MAYHEM is scaling to find and demonstrate exploitable bugs. The hybrid symbolic execution technique we present in this paper is completely different from hybrid concolic testing [20], which interleaves random testing with concolic execution to achieve better code coverage.
XI. CONCLUSION
We presented MAYHEM, a tool for automatically finding exploitable bugs in binary (i.e., executable) programs in an efficient and scalable way.
To this end, MAYHEM introduces a novel hybrid symbolic execution scheme that combines the benefits of existing (online and offline) symbolic execution techniques into a single system. We also presented index-based memory modeling, a technique that allows MAYHEM to discover more exploitable bugs at the binary level. We used MAYHEM to analyze 29 applications and automatically identified and demonstrated 29 exploitable vulnerabilities.
XII. ACKNOWLEDGEMENTS
We thank our shepherd, Cristian Cadar, and the anonymous reviewers for their helpful comments and feedback. This research was supported by a DARPA grant to CyLab at Carnegie Mellon University (N11AP20005/D11AP00262), an NSF CAREER grant (CNS-0953751), and partial CyLab ARO support from grants DAAD19-02-1-0389 and W911NF-09-1-0273. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
References
Fast Approximate Discovery of Inclusion Dependencies
Sebastian Kruse, Thorsten Papenbrock, Christian Dullweber, Moritz Finke, Manuel Hegner, Martin Zabel, Christian Zöllner, Felix Naumann
Abstract: Inclusion dependencies (INDs) are relevant to several data management tasks, such as foreign key detection and data integration, and their discovery is a core concern of data profiling. However, n-ary IND discovery is computationally expensive, so that existing algorithms often perform poorly on complex datasets. To this end, we present FAIDA, the first approximate IND discovery algorithm. FAIDA combines probabilistic and exact data structures to approximate the INDs in relational datasets. FAIDA guarantees to find all INDs; only with a low probability might false positives occur due to the approximation. This small inaccuracy is traded for significantly increased performance, though. In our evaluation, we show that FAIDA scales to very large datasets and outperforms the state-of-the-art algorithm by a factor of up to six in terms of runtime, without reporting any false positives. This shows that FAIDA strikes a good balance between efficiency and correctness.
Keywords: inclusion dependencies, data profiling, dependency discovery, metadata, approximation
1 The Intricacies of Inclusion Dependency Discovery
It is a well-known fact that ever-increasing amounts of data are being collected. To put such large and complex datasets to use, be it for machine learning, data integration, or any other application, it is crucial to know the datasets’ structure. Unfortunately, this information is oftentimes missing, incomplete, or outdated for all sorts of reasons. To overcome this quandary, the research area of data profiling has produced several algorithms to discover structural metadata of any given dataset. A very important and fundamental type of structural metadata of relational databases is the inclusion dependency (IND) [AGN15].
They form an integral component of foreign key (FK) discovery [Ro09], allow for query optimizations [Gr98], enable integrity checking [CTF88], and serve many further data management tasks. Intuitively, an IND states that a combination of columns from one database table contains only values of another column combination, which might or might not be in the same table. Before looking at a concrete example, let us formalize this notion.
Definition 1 (Inclusion dependency) Let $r$ and $s$ be two relational, potentially equal, tables with schemata $R = (R_1, \ldots, R_k)$ and $S = (S_1, \ldots, S_m)$, respectively. Further, let $\bar{R} = R_{i_1} \ldots R_{i_n}$ and $\bar{S} = S_{j_1} \ldots S_{j_n}$ be $n$-ary column combinations of distinct columns. We say that $\bar{R}$ is included in $\bar{S}$, i.e., $\bar{R} \subseteq \bar{S}$, if for every tuple $t_r \in r$ there is a tuple $t_s \in s$ such that $t_r(\bar{R}) = t_s(\bar{S})$. $\bar{R}$ is called the dependent column combination and $\bar{S}$ the referenced column combination. With both of them having $n$ columns, the IND is said to be $n$-ary.
¹ Hasso Plattner Institute (HPI), Prof.-Dr.-Helmert-Str. 2-3, 14482 Potsdam, firstname.lastname@hpi.de
² Hasso Plattner Institute (HPI), Prof.-Dr.-Helmert-Str. 2-3, 14482 Potsdam, firstname.lastname@student.hpi.de
Tab. 1 illustrates a lexicographical example dataset comprising a dictionary table that stores words of different languages, and a translation table that translates those words from one language to the other. Apparently, there are, amongst others, two interesting ternary INDs to be found in that example, namely $\text{word1, lang1, type1} \subseteq \text{word, lang, type}$ and $\text{word2, lang2, type2} \subseteq \text{word, lang, type}$. Intuitively, these INDs require that all words in the translation table are found in the dictionary table.
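Definition 1 translates directly into a set-based membership check. The sketch below (function and variable names are ours) verifies the first ternary IND on an abbreviated version of the Tab. 1 data, using only the dependent-side columns of the translation table.

```python
def ind_holds(r_rows, s_rows, r_cols, s_cols):
    """Exact check of the n-ary IND r[r_cols] ⊆ s[s_cols]: every
    projected tuple of r must occur among the projected tuples of s."""
    referenced = {tuple(t[c] for c in s_cols) for t in s_rows}
    return all(tuple(t[c] for c in r_cols) in referenced for t in r_rows)

# Abbreviated data from Tab. 1 (only the word1/lang1/type1 side).
dictionary = [
    {"word": "hut", "lang": "en", "type": "noun"},
    {"word": "hat", "lang": "en", "type": "noun"},
    {"word": "has", "lang": "en", "type": "verb"},
    {"word": "Hütte", "lang": "de", "type": "noun"},
    {"word": "Hut", "lang": "de", "type": "noun"},
    {"word": "hat", "lang": "de", "type": "verb"},
]
translation = [
    {"word1": "hut", "lang1": "en", "type1": "noun"},
    {"word1": "hat", "lang1": "en", "type1": "noun"},
    {"word1": "has", "lang1": "en", "type1": "verb"},
]
```

Note that this exact check materializes all projected tuples of the referenced side, which is precisely the kind of data shuffling that becomes a bottleneck at scale.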
Note in particular the stronger semantics in comparison to INDs of lower arity, e.g., $\text{word1} \subseteq \text{word}$ and $\text{word2} \subseteq \text{word}$. While the former, ternary INDs identify words not only by their literal but also by their language and syntactic type, the latter, unary INDs merely consider the word literal, which does not suffice to uniquely identify a word (cf. hat). Due to these stronger semantics, it is worthwhile to discover INDs of the highest possible arity. <table> <thead> <tr> <th>word</th> <th>lang</th> <th>type</th> </tr> </thead> <tbody> <tr> <td>hut</td> <td>en</td> <td>noun</td> </tr> <tr> <td>hat</td> <td>en</td> <td>noun</td> </tr> <tr> <td>has</td> <td>en</td> <td>verb</td> </tr> <tr> <td>Hütte</td> <td>de</td> <td>noun</td> </tr> <tr> <td>Hut</td> <td>de</td> <td>noun</td> </tr> <tr> <td>hat</td> <td>de</td> <td>verb</td> </tr> </tbody> </table> (a) Dictionary table. <table> <thead> <tr> <th>word1</th> <th>lang1</th> <th>type1</th> <th>word2</th> <th>lang2</th> <th>type2</th> <th>fit</th> </tr> </thead> <tbody> <tr> <td>hut</td> <td>en</td> <td>noun</td> <td>Hütte</td> <td>de</td> <td>noun</td> <td>⊥</td> </tr> <tr> <td>hat</td> <td>en</td> <td>noun</td> <td>Hut</td> <td>de</td> <td>noun</td> <td>⊥</td> </tr> <tr> <td>has</td> <td>en</td> <td>verb</td> <td>hat</td> <td>de</td> <td>verb</td> <td>⊥</td> </tr> </tbody> </table> (b) Translation table. Tab. 1: A lexicographical example dataset with several INDs. In recent years, several IND discovery algorithms have been proposed [Pa15, DMLP09, KR03, DMP03], pushing the boundaries in terms of efficiency and scalability. However, many real-world datasets cannot be processed by any of these algorithms within reasonable time, even on powerful hardware, for two main reasons: First, the number of valid $n$-ary INDs is often enormously large in real-world datasets. The result sets alone can, therefore, already exceed main memory limits [Pa15].
Second, and more commonly, the existing algorithms need to shuffle huge amounts of data to test IND candidates; in fact, the amount of shuffled data depends on the number of IND candidates and easily exceeds the inspected dataset in size. Some algorithms perform those shuffles out-of-core to overcome main memory limitations. Still, not only does this operation remain an efficiency bottleneck, but the shuffled data can also become so large that even disk storage limits are exceeded. We propose to tackle the latter issue by approximating the INDs of datasets, that is, for any given dataset we calculate a set of INDs that is complete but might contain false positives. The guarantee of correctness is thus traded for great performance improvements. Let us justify why this trade is worthwhile: We observed that in real-world datasets any two column combinations are either related by an IND or their values are disjoint to a great extent. In other words, it is rare that the vast majority of values of one column combination are included in the other column combination except for a small remainder. This clear cut allows us to use more light-weight, approximate methods to test IND candidates without risking severe accuracy losses. Nonetheless, in the few cases where two columns overlap in, say, 99% of their values and an approximate method indeed incorrectly reports an IND, this false positive is still a partial IND, i.e., it has only few violating values. We note that guaranteed and complete correctness of INDs is not required by many use cases, such as FK discovery [Ro09, Zh10] and data cleaning [Bo05]. To this end, we introduce FAIDA, the first approximate discovery algorithm for unary and $n$-ary INDs.
FAIDA uses two different approximate data structures: a hash-based probabilistic data structure to characterize column combinations with a very small memory footprint, and a sampling-based inverted index to attenuate statistically expected inaccuracies. The combination of these two data structures offers high-precision results, because each compensates for the other's weaknesses. In fact, we found FAIDA to report exact results in all our experiments. In addition, we characterize the novel class of scrap INDs, a sort of degenerate INDs that are not applicable to typical IND use cases but usually make up a considerable share of the INDs in a dataset. FAIDA identifies and prunes scrap INDs to narrow down the search space and achieve further performance improvements. This is particularly useful in situations where there are actually intractably many INDs, as mentioned above. The remainder of the paper is organized as follows: In Sect. 2, we describe related work. We proceed to give an overview of FAIDA in Sect. 3, followed by a detailed description of its hybrid IND checking process in Sect. 4 and a formalization of and rationale for scrap INDs in Sect. 5. Then, in Sect. 6, we compare FAIDA to the exact state-of-the-art IND discovery algorithm Binder and evaluate FAIDA's effectiveness and efficiency in detail. Finally, we conclude in Sect. 7.

## 2 Related Work

The discovery of dependencies, such as functional dependencies, order dependencies, or inclusion dependencies, in a given database is considered an essential component of data profiling [AGN15]. In this section, we focus on related work that addresses the approximate and exact discovery of inclusion dependencies. **Approximate IND discovery.** We define approximation as the estimation of the set of actual INDs in a dataset.
This nomenclature is in contrast to “approximate INDs” (also: “partial INDs”) that hold only on a subset of the rows [LPT02, DMLP09]. This orthogonal concern is not the focus of this paper. The only existing approach to approximate IND discovery is described by Zhang et al. as part of foreign-key (FK) discovery [Zh10]. It uses bottom-k sketches to approximate the inclusion of two columns via their Jaccard coefficients. Their approach has several disadvantages: For each level of $n$-ary INDs, the hashes for the bottom-k sketches have to be computed from all actual values. Furthermore, it suffers from a similar problem as the probabilistic data structure used by FAIDA when comparing a column $c_1$ that has only few distinct values with a column $c_2$ that has many distinct values: the two bottom sets have potentially only a small overlap, and in the worst case, all bottom hashes of $c_2$ are smaller than the bottom hashes of $c_1$. While FAIDA's errors are limited to false positives, this effect can additionally lead to false negatives. Moreover, the authors focus on FK candidates and apply the approach only to relatively few candidates whose right-hand side has to be a known primary key. For these reasons, the proposed algorithm produces significantly different result sets than FAIDA, so that a performance comparison between these algorithms does not make sense. **Exact IND discovery.** In previous research, much attention has been paid to the discovery of unary INDs, i.e., INDs between single columns. Different discovery strategies have been proposed, based on inverted indices [DMLP09], sort-merge joins [Ba07], and distributed data aggregation [KPN15]. While efficient unary IND discovery is an important part of $n$-ary IND detection, the problem is not of exponential complexity and therefore a much simpler task. Of course, FAIDA can also be applied to the efficient discovery of unary INDs.
Research has also devised exact algorithms for the discovery of $n$-ary INDs, in particular MIND [DMLP09] and Binder [Pa15]. Both employ an Apriori-like discovery scheme to find IND candidates. While MIND tests these candidates individually against a database, Binder employs a more efficient divide-and-conquer strategy to test complete candidate sets in a single pass over the data. This makes Binder the current state-of-the-art algorithm, which we compare against in our evaluation. Nevertheless, both strategies exhibit degraded performance and increased memory consumption when testing IND candidates of high arity. In contrast, FAIDA, which also builds upon Apriori candidate generation, employs probabilistic data structures that do not suffer from this effect. Besides Apriori-based approaches, depth-first algorithms have been proposed that optimize candidate generation towards inclusion dependencies of very high arity [KR03, DMP03]. However, these algorithms employ the same expensive IND checking mechanisms as MIND and are only applicable to pairs of tables, lacking a strategy to deal efficiently with whole datasets. Another recent approach avoids candidate generation entirely [SM16]. This is achieved by determining for every pair of tuples from two given tables which unary INDs they support. These sets of unary INDs are then successively merged into maximal $n$-ary INDs. Again, no strategy is given to efficiently profile datasets with more than two tables. **Foreign key discovery.** Although closely related, IND discovery and FK discovery are distinct problems: not every IND that holds on a given dataset is a FK relationship. Vice versa, in unclean databases, there might be semantically intended FK relationships whose corresponding INDs are violated by several tuples [Zh10]. However, INDs are a prerequisite for several FK discovery algorithms [Ro09, Zh10].
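The Apriori-like scheme shared by MIND, Binder, and also FAIDA exploits that INDs are downward closed: an $n$-ary IND can only hold if all of its $(n-1)$-ary projections hold. The following sketch illustrates this generation step; the tuple representation and helper logic are illustrative and not taken from any of the cited implementations:

```python
def generate_candidates(valid_prev, valid_unary):
    """Apriori-style generation of n-ary IND candidates (illustrative sketch).

    An IND is a pair (dep, ref) of equally long column tuples. valid_prev holds
    the verified (n-1)-ary INDs, valid_unary the verified unary INDs. A candidate
    is emitted only if every (n-1)-ary projection of it is valid (downward closure).
    Real algorithms additionally deduplicate permutations of the same IND.
    """
    prev = set(valid_prev)
    candidates = set()
    for dep, ref in valid_prev:
        for (a,), (b,) in valid_unary:
            if a in dep or b in ref:
                continue  # no repeated columns within one side
            cand = (dep + (a,), ref + (b,))
            # Downward closure: drop each position i and check the projection.
            projections_valid = all(
                (tuple(c for j, c in enumerate(cand[0]) if j != i),
                 tuple(c for j, c in enumerate(cand[1]) if j != i)) in prev
                for i in range(len(cand[0]))
            )
            if projections_valid:
                candidates.add(cand)
    return candidates

# From the unary INDs A ⊆ C and B ⊆ D, the binary candidates AB ⊆ CD and
# its permutation BA ⊆ DC are generated:
unary_inds = [(("A",), ("C",)), (("B",), ("D",))]
binary_candidates = generate_candidates(unary_inds, unary_inds)
print(sorted(binary_candidates))
```

Only the candidates surviving the downward-closure check need to be tested against the data, which is what keeps the exponential search space tractable in practice.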
## 3 Overview of FAIDA In this section, we present FAIDA, our Fast Approximate IND Discovery Algorithm, from a bird’s eye view before giving more details in the following sections. Fig. 1 depicts FAIDA’s general mode of operation. Like most $n$-ary IND discovery algorithms, FAIDA starts by identifying unary INDs, then uses an Apriori-style candidate generation process to generate binary IND candidates, and checks those in turn. This generate-and-test procedure is repeated – i.e., the latest discovered $n$-ary INDs are used to generate $(n+1)$-ary IND candidates, which are tested subsequently to retain the actual $(n+1)$-ary INDs – until the candidate generator produces no more IND candidates for some arity $n_{\text{max}}$. In the following, we describe the various processing steps in more detail. **Preprocessing.** The performance of IND discovery algorithms is mainly impacted by the fact that the input dataset has to be re-read and shuffled in every IND test phase. FAIDA attenuates this issue in a preprocessing step. At first, it converts the input dataset into *hashed columns*, i.e., the values in the input dataset tables are hashed and stored in a columnar layout. During the IND test phases, FAIDA will then resort to those hashed columns rather than the original input dataset, thereby greatly reducing the amount of data to be read, as we explain in the next paragraph. Furthermore, *hashed samples* of every table are stored – they are needed for bootstrapping the inverted index in the IND test phase. In Sect. 4.1, we explain the preprocessing step and its impact on performance and the IND result quality in greater detail. Still, we already want to remark that the use of compact hashes instead of actual values greatly improves the performance of FAIDA in the subsequent phases, but it cannot guarantee exact results due to potential hash collisions.
If two values share the same hash value, FAIDA will deem those two values equal. Nevertheless, this phenomenon can only cause false positive INDs; it cannot cause any INDs to be missed. Moreover, it is extremely unlikely that single hash collisions yield false positive INDs, because the distinction between INDs and non-INDs is usually not governed by single values only. **IND test.** To test IND candidates, FAIDA uses a hybrid approach that builds upon a probabilistic HyperLogLog structure [Fl07] to represent columns with many distinct values and a sampling-based inverted index to represent columns with few distinct values. For each level, i.e., for each IND arity, FAIDA passes once over the relevant hashed columns, inserts them into the two data structures, and finally jointly evaluates them to determine the actual INDs. The IND tests might also produce false positives in addition to those caused by the hashing during preprocessing, but they do not produce false negatives. Because both the hashing and the IND tests err only towards false positives, FAIDA is guaranteed to find all INDs in a dataset. Sects. 4.2 and 4.3 discuss our hybrid IND test strategy in detail. **Candidate generation.** FAIDA uses the same Apriori-style candidate generation as existing algorithms [DMLP09, Pa15]. This procedure makes use of the downward closure property of INDs: It only generates an $n$-ary IND candidate $A_1 A_2 \ldots A_n \subseteq B_1 B_2 \ldots B_n$ if the $n$ $(n-1)$-ary INDs $A_2 A_3 \ldots A_n \subseteq B_2 B_3 \ldots B_n$, $A_1 A_3 \ldots A_n \subseteq B_1 B_3 \ldots B_n$, $\ldots$, and $A_1 A_2 \ldots A_{n-1} \subseteq B_1 B_2 \ldots B_{n-1}$ are verified to hold on the profiled dataset. There are multiple reasons why we did not replace the candidate generation with an approximate version.
First, we found in our experiments that the candidate generation takes only a tiny fraction of the overall runtime of IND discovery algorithms, so the overall gains of any performance improvement here would be marginal. Second, if an approximate candidate generation produced false positives, we would likely end up with inferior performance, because the algorithm would need to test those additional IND candidates as well. Finally, if an approximate candidate generation yielded false negatives, i.e., if it missed out on some IND candidates, FAIDA could no longer guarantee the completeness of its results, which would be a bad trade. However, FAIDA might still prune some candidates deliberately: In Sect. 5, we describe the class of scrap INDs that oftentimes make up a great share of all INDs in a dataset but that are mostly useless. We show how to detect scrap INDs, so as to remove them from the set of IND candidates for performance improvements.

## 4 Fast and Lean Inclusion Dependency Approximation

As stated in Sect. 3, FAIDA adopts the same workflow for IND discovery as most exact algorithms: First, it discovers all unary INDs and, then, iteratively generates and tests IND candidates of the respectively next arity. However, FAIDA uses approximation techniques in this process to reduce the amount of data handled in each iteration and, ultimately, to improve performance. In the following, we explain the building blocks as well as the interplay of this approximation scheme in more detail.

### 4.1 Read-Optimized Input Data

Whenever there is a set of IND candidates to be tested, exact IND discovery algorithms (i) read the input dataset, (ii) extract the value combinations that belong to the dependent or referenced column combination of any IND candidate, and then (iii) shuffle those value combinations to compare the column combinations of the IND candidates and determine the actual INDs.
By dropping the correctness guarantee for the discovered INDs, FAIDA can use a completely different, more efficient, and more scalable approach to test IND candidates. Some activities of FAIDA's IND test can be factored out of the IND test loop and instead be done only once, before the first IND test, which further improves performance. We describe those in the following. **Hashing.** FAIDA's IND test uses hashes of the values in the input dataset rather than the actual values. This is obviously favorable w.r.t. performance and scalability, because hashes are of a small, fixed size in contrast to the actual values. Thus, they consume less memory and can be compared efficiently. Moreover, FAIDA's IND test uses HyperLogLog [Fl07] data structures, which operate on hashes anyway. However, depending on the hash function, the hashing can be quite CPU-intensive. Of course, it is necessary to read the input data once before it can be hashed. Because re-reading the input dataset over and over again is costly in terms of disk I/O, FAIDA reads the input dataset only once, hashes its values, and writes the resulting hashes back to disk. Note that for testing $n$-ary INDs with $n \geq 2$, other IND discovery algorithms need to shuffle combinations of values. In contrast, FAIDA merges the individual value hashes of those value combinations into a new, single hash value using a simple bitwise XOR. Consequently, the descriptions of FAIDA's other components refer, without loss of generality, only to single hash values and not to combinations of hash values. **Columnar data layout.** In most cases, relational data is organized in a row layout. When testing $n$-ary IND candidates with $n \geq 2$, most columns of the input dataset usually do not appear in all IND candidates. Still, in a row layout, those columns have to be read without being of any use.
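The hashing and XOR-merging scheme described above can be sketched as follows; `hash64` is a stand-in for whatever fixed-size hash function an implementation would actually choose:

```python
import hashlib

def hash64(value):
    """Fixed-size 64-bit hash of a value (illustrative choice of hash function)."""
    digest = hashlib.blake2b(str(value).encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "big")

def combine_hashes(hashes):
    """Merge the per-column hashes of one value combination via bitwise XOR,
    so n-ary tests can reuse the unary hashes without re-reading values."""
    combined = 0
    for h in hashes:
        combined ^= h
    return combined

# Equal value combinations always produce equal combined hashes:
h1 = combine_hashes((hash64("hat"), hash64("en")))
h2 = combine_hashes((hash64("hat"), hash64("en")))
assert h1 == h2
# Note that XOR is order-insensitive, so swapped combinations collide too –
# mathematically one more (rare) source of hash collisions, i.e., of false positives.
assert combine_hashes((hash64("en"), hash64("hat"))) == h1
```

The key property is that combined hashes are derived from the stored per-column hashes alone, so no value combinations ever need to be materialized or shuffled.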
FAIDA avoids this inefficiency by storing the above-described hashes in a columnar layout, thereby allowing the IND test to read only those columns that are part of an IND candidate. For the example dataset from Tab. 1, FAIDA creates nine files, each containing the hashes of the values of one of the columns in that dataset. **Table samples.** As mentioned in Sect. 3, FAIDA uses a hybrid IND test strategy with HyperLogLog structures and an inverted index. The inverted index operates on a small sample of the (hashed) input data. Our algorithm calculates this sample once in the beginning and then reuses it in every IND test phase. In fact, we have the following requirement for the sample: Given a sample size $s$ (e.g., $s = 500$), we need a sample of each table such that this sample table contains $\min\{s, d_A\}$ distinct values for each column $A$, with $d_A$ being the actual number of distinct values of $A$. The simple rationale for this requirement is that for columns with only few distinct values, we aim to ensure that these are effectively processed in the inverted index. If we took a random sample instead, we would most likely capture only a subset of a column's actual values, impairing the performance of the inverted index. To generate this sample, we use a simple greedy procedure that is depicted in Algorithm 1. The sampling algorithm is applied to each table individually and can be piggy-backed onto the preprocessing steps described above. Note that it operates on the hashed values and thus benefits from the low memory footprint of its data structures. The algorithm starts by initializing two data structures, namely $T_s$, which collects the sample tuples, and $sampledValues$, which tracks for each of the columns in the table the values that have been sampled from it so far (Lines 1–2). Then, it iterates over all tuples of the table (Line 3) to decide for each tuple whether it should be included in the sample.
A tuple should be included if a column exists that does not yet have $s$ different sampled values and the tuple provides a yet unseen value for that column (Lines 4–7). If so, the tuple is added to $T_s$ and the samples in $sampledValues$ are updated accordingly. As an example, consider Tab. 1a and assume $s = 2$. In that case, Algorithm 1 would sample the first four tuples: The first tuple is always sampled anyway; the second tuple provides a new value for word; the third for type; and the fourth for lang. Afterwards, the algorithm has sampled at least two values for each column, so no further tuple will be picked.

**Algorithm 1:** Create a hashed table sample
**Input:** hashed tuples $T$ for a table with attributes $A_1 \ldots A_n$; minimum number $s$ of values to be sampled per column
**Output:** sample $T_s$ of the hashed tuples

1. $T_s \leftarrow \emptyset$; $sampledValues \leftarrow \text{arrayOfSize}(n)$
2. **foreach** $1 \leq i \leq n$ **do** $sampledValues[i] \leftarrow \emptyset$
3. **foreach** $t \in T$ **do**
4. &nbsp;&nbsp;&nbsp;&nbsp;$addToSample \leftarrow \text{false}$
5. &nbsp;&nbsp;&nbsp;&nbsp;**foreach** $1 \leq i \leq n$ **do**
6. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**if** $|sampledValues[i]| < s \wedge t[A_i] \notin sampledValues[i]$ **then**
7. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$addToSample \leftarrow \text{true}$
8. &nbsp;&nbsp;&nbsp;&nbsp;**if** $addToSample$ **then**
9. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$T_s \leftarrow T_s \cup \{t\}$
10. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**foreach** $1 \leq i \leq n$ **do** $sampledValues[i] \leftarrow sampledValues[i] \cup \{t[A_i]\}$

### 4.2 Scalable Probabilistic Inclusion Dependency Test

As stated in the previous sections, the major bottleneck of exact IND discovery algorithms is the IND candidate testing, which requires shuffling large amounts of data, especially when many IND candidates of higher arities arise. Not only are there more value combinations of larger individual size with increasing arity, but the shuffling itself also becomes more expensive: while the shuffling can eliminate duplicate values, the likelihood of duplicate value combinations drastically declines with the arity, so this reduction becomes less and less effective.
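Algorithm 1's greedy sampling translates to Python roughly as follows; applied to Tab. 1a's dictionary table with $s = 2$, it keeps exactly the first four tuples, as argued in the example above:

```python
def sample_table(tuples, n_columns, s):
    """Greedy sampling after Algorithm 1: keep a tuple whenever some column
    still lacks s distinct sampled values and the tuple adds a new one."""
    sample = []
    sampled_values = [set() for _ in range(n_columns)]
    for t in tuples:
        add_to_sample = any(
            len(sampled_values[i]) < s and t[i] not in sampled_values[i]
            for i in range(n_columns)
        )
        if add_to_sample:
            sample.append(t)
            for i in range(n_columns):  # all of the tuple's values are recorded
                sampled_values[i].add(t[i])
    return sample

dictionary = [
    ("hut", "en", "noun"), ("hat", "en", "noun"), ("has", "en", "verb"),
    ("Hütte", "de", "noun"), ("Hut", "de", "noun"), ("hat", "de", "verb"),
]
print(sample_table(dictionary, 3, s=2))  # the first four tuples
```

Note that a kept tuple contributes all of its column values to the bookkeeping, which is why the later tuples of the dictionary add nothing new and are skipped.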
Using hashes instead of values, as described in Sect. 4.1, can only mitigate but not completely avoid this problem, and it might also eventually succumb to memory limitations. FAIDA avoids the shuffling completely and uses a probabilistic approach for IND testing. The main idea is to calculate a summary for each column combination in the IND candidates and then perform a heuristic IND test on those summaries. An obvious instance of this idea is to encode column combinations with Bloom filters and then check for each IND candidate whether its referenced column combination's Bloom filter has all bits of the dependent column combination's Bloom filter set. However, Bloom filters are prone to oversaturation for very large columns. Therefore, we use set cardinalities: Let $X \subseteq Y$ be an IND candidate and $s$ a function that maps a column combination to the set of all its contained value combinations. Then $X \subseteq Y$ is an IND if and only if $|s(Y)| = |s(X) \cup s(Y)|$. The set cardinality of a multiset can be efficiently and effectively estimated with HyperLogLog [Fl07], which scales to very large cardinalities with arbitrary precision. Before we describe how FAIDA employs HyperLogLog for IND tests, let us briefly explain how that counting scheme works. At its core, it makes use of the following observation: Let $S$ be a sample from a uniform distribution of the values from $0$ to $2^k - 1$ for some $k$, and let $n$ be the number of leading zeroes in the binary representation of $\min S$ using $k$ digits. Then $|S|$ can be estimated as $2^n$. Assuming a good hash function that produces uniformly distributed, mostly collision-free hash values for its input data, we can count any values via their hashes. HyperLogLog extends this idea by partitioning the hashes by their prefixes into buckets and maintaining for each bucket the largest number of leading zeroes observed in the residual suffix bits.
The observations in the different buckets are then merged via a harmonic mean with additional bias correction [Fl07]. Consider Fig. 2 as an example, where we use HyperLogLog to estimate the set cardinality of the word column from Tab. 1. We employ a 4-bit hash function, using the first bit for partitioning and the residual three bits to count leading zeroes. Apparently, for the bucket with prefix $0$, $\text{hash}(\text{has}) = 0011$ provides the most leading zeroes in the suffix (namely one), while for the bucket with prefix $1$, $\text{hash}(\text{hat}) = 1000$ provides the most leading zeroes, namely three. Applying the harmonic mean and bias correction, HyperLogLog estimates four as the set cardinality of the input values. Note that HyperLogLog is, due to its stochastic nature, better suited to estimating the set cardinality of larger datasets.

Fig. 2: Example HyperLogLog structure with two buckets.

Given this intuition of HyperLogLog, we proceed to show how we use this data structure for IND testing. As mentioned above, the basic idea is that for an IND candidate $X \subseteq Y$ to hold, the set cardinality of $Y$ must be equal to the joint set cardinality of $X$ and $Y$. A naïve implementation of this idea would use two HyperLogLog structures to estimate and compare both cardinalities, say $H_{LY}$ and $H_{LXY}$. This approach requires only very little processing effort in contrast to the shuffling of exact IND discovery algorithms: It is only necessary to scan once through the data and update the HyperLogLog structures, which themselves are of constant size. However, this is expected to work well only when (i) $Y$ contains a stochastically relevant number of elements and (ii) the difference of the count estimates of $H_{LY}$ and $H_{LXY}$ is greater than the expected estimation error of $H_{LY}$. The estimation error can be controlled via the number of buckets in the HyperLogLog structures.
With our observation that the distinction of INDs and non-INDs is not governed by only a few values, we can assume the second criterion to hold if we keep the expected estimation error small, e.g., around $0.1\%$. With that theoretical understanding of the applicability of HyperLogLog, we can now tailor it a bit more towards IND tests: The only case where $H_{LXY}$ would yield an estimate greater than that of $H_{LY}$ applies when there is some element in $X$ that is not in $Y$ and that provides more leading zeroes to some bucket in $H_{LXY}$ than any element from $Y$ does. Thus, we can instead maintain the two HyperLogLog structures $H_{LX}$ and $H_{LY}$ and check if $H_{LX}$ has observed more leading zeroes than $H_{LY}$ in any of the buckets. While this test is logically equivalent to the above naïve approach, it requires FAIDA to maintain only one HyperLogLog structure per column combination rather than up to two HyperLogLog structures per IND candidate, whose number is theoretically bounded only by the square of the number of column combinations. Tab. 2 exemplifies HyperLogLog structures with two buckets applied to the example dataset from Tab. 1. Note that in practical scenarios, more buckets should be used to enhance HyperLogLog's accuracy. Given that all column pairs are IND candidates, the above-described IND test identifies all actual INDs correctly. For instance, $\text{word1} \subseteq \text{word}$ is correctly deemed to be an IND, because the HyperLogLog bucket values of word1, i.e., 1 and 3, are less than or equal to the respective bucket values of word, which are also 1 and 3. This result completeness is guaranteed, because if $X \subseteq Y$ is actually an IND, then all values of $X$ also occur in $Y$ and are therefore reflected in the HyperLogLog structure of $Y$. Correctness of the result cannot be guaranteed, though. For instance, $\text{word} \subseteq \text{word1}$ is deemed to be an IND judging from the HyperLogLog structures, although it is actually not.
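The bucket-wise comparison can be sketched as follows; the 4-bit hashes mirror the toy setting of Fig. 2 (only hash(has) = 0011 and hash(hat) = 1000 are given there, the remaining hashes are made up), while real implementations use far wider hashes and many more buckets:

```python
def bucket_sketch(hashes, prefix_bits=1, hash_bits=4):
    """Toy HyperLogLog registers: per bucket (hash prefix), the maximum number
    of leading zeroes observed among the residual suffix bits."""
    suffix_bits = hash_bits - prefix_bits
    registers = [0] * (1 << prefix_bits)
    for h in hashes:
        bucket = h >> suffix_bits
        suffix = h & ((1 << suffix_bits) - 1)
        leading_zeroes = suffix_bits - suffix.bit_length()
        registers[bucket] = max(registers[bucket], leading_zeroes)
    return registers

def may_be_ind(sketch_x, sketch_y):
    """X ⊆ Y can only hold if no bucket of X saw more leading zeroes than Y's."""
    return all(x <= y for x, y in zip(sketch_x, sketch_y))

word = bucket_sketch([0b0011, 0b1000, 0b0110, 0b1110, 0b1011])
print(word)  # [1, 3]
# A subset of the hashes can never exceed the full column's registers:
assert may_be_ind(bucket_sketch([0b0011, 0b1000]), word)
# A column contributing more leading zeroes to some bucket is rejected:
assert not may_be_ind(bucket_sketch([0b0000]), word)  # bucket 0: 3 > 1
```

As in the paper's test, only one sketch per column combination is kept, and a candidate passes when the comparison holds for every bucket.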
As mentioned above, HyperLogLog is not well-suited to compare columns with only few distinct values. Therefore, we complement it with a second IND test, as presented in the next section. <table> <thead> <tr> <th>Prefix</th> <th>word</th> <th>lang</th> <th>type</th> <th>word1</th> <th>lang1</th> <th>type1</th> <th>word2</th> <th>lang2</th> <th>type2</th> <th>fit</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>3</td> <td>1</td> <td>2</td> <td>3</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> </tr> </tbody> </table> Tab. 2: HyperLogLog structures for single columns of the example dataset.

### 4.3 Hybrid Inclusion Dependency Test

HyperLogLog is a stochastic counting approximation that works particularly well for IND tests where both column combinations have many distinct values. To fill its blind spot – IND candidates containing a column combination with only few values – the IND tests additionally use an inverted index. IND testing with inverted indices was first introduced by De Marchi et al. [DMLP09]. The basic idea is to build an inverted index of the input dataset that maps each value to the columns it appears in. For instance, for our example dataset in Tab. 1, such an inverted index maps the value en to the set of columns \{lang, lang1, lang2\}. Now, to find all columns that include a certain column $X$, it suffices to select all column sets that contain $X$ and intersect them. Applying this procedure for every column $X$ yields all the INDs in the dataset. Due to scaling problems, FAIDA cannot create such an inverted index on the entire dataset. However, it is possible to create such an inverted index for a subset of the values (or rather a subset of the hashes, as described in Sect. 4.1) in the dataset and apply the said IND test to it.
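De Marchi et al.'s index-based test can be sketched as follows, here for unary INDs over single columns; FAIDA applies the same principle to the hashed table samples instead of the full data:

```python
def inds_via_inverted_index(columns):
    """De Marchi-style unary IND test: map each value to the set of columns it
    occurs in, then intersect those sets per column. `columns` maps a column
    name to its list of values."""
    inverted = {}
    for name, values in columns.items():
        for v in values:
            inverted.setdefault(v, set()).add(name)
    referenced = {}  # column -> all columns that include every one of its values
    for name, values in columns.items():
        refs = None
        for v in set(values):
            refs = inverted[v] if refs is None else refs & inverted[v]
        referenced[name] = (refs or set()) - {name}
    return referenced

# The value "en" maps to {lang, lang1}, "de" to {lang, lang2} (cf. the example above):
columns = {
    "lang":  ["en", "en", "en", "de", "de", "de"],
    "lang1": ["en", "en", "en"],
    "lang2": ["de", "de", "de"],
}
refs = inds_via_inverted_index(columns)
print(refs["lang1"])  # {'lang'}, i.e., lang1 ⊆ lang
```

Run on a sample, the same intersection logic yields a necessary condition instead of an exact answer, which is precisely how Algorithm 2 uses it below.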
This idea seems promising, because we can control the sample size and yet focus the sample in such a manner that it comprises especially the values of columns with only few distinct values (cf. Sect. 4.1), i.e., those cases where HyperLogLog is not so well-suited. Moreover, this approach still preserves result completeness: If $X \subseteq Y$ is an IND, then $\sigma(X) \subseteq \sigma(Y)$ has to hold, too, where $\sigma$ selects only those values that are in the sample. Algorithm 2 implements this idea. It starts by taking the sample tuples of each table (cf. Sect. 4.1). Then, it creates an inverted index for all column combinations that appear in any of the IND candidates (Lines 1–7). Next, Algorithm 2 builds a HyperLogLog structure for each column combination. In addition, the algorithm initializes a flag in $isCovered$ for each column combination to keep track of whether all of its values are actually found in the sample and, thus, in the inverted index (Lines 8–12). Having initialized all relevant data structures, the algorithm iterates over all values of all column combinations (Line 13). If a value is included in the table samples and, thus, a key of the inverted index, the corresponding index entry is updated with the column combination of that value (Lines 14–16); otherwise, the algorithm updates the corresponding HyperLogLog structure with that value (Line 17). In the latter case, the algorithm also notes that the respective column combination is not completely covered by the inverted index (Line 19). Whether or not a column combination is covered becomes relevant in the subsequent phase, where the IND candidates are actually tested. At the beginning of that phase, only those IND candidates are retained that hold on the inverted index (Line 20).
**Algorithm 2:** Hybrid IND test
**Input:** set of IND candidates $I_c$; samples of hashed tuples for each table $\mathcal{T}_s$; hashes for all value combinations $V$
**Output:** verified INDs $I$

1. $invertedIndex \leftarrow \text{mapping}(\text{defaultValue} = \emptyset)$
2. **foreach** $T_s \in \mathcal{T}_s$ **do**
3. &nbsp;&nbsp;&nbsp;&nbsp;$C \leftarrow \text{relevantColumnCombinations}(T_s, I_c)$
4. &nbsp;&nbsp;&nbsp;&nbsp;**foreach** $t \in T_s$ **do**
5. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**foreach** $c \in C$ **do**
6. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$v \leftarrow t[c]$
7. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$invertedIndex[v] \leftarrow invertedIndex[v] \cup \{c\}$
8. $isCovered \leftarrow \text{mapping}()$
9. $hlls \leftarrow \text{mapping}()$
10. **foreach** $c \in \text{allColumnCombinations}(I_c)$ **do**
11. &nbsp;&nbsp;&nbsp;&nbsp;$isCovered[c] \leftarrow \text{true}$
12. &nbsp;&nbsp;&nbsp;&nbsp;$hlls[c] \leftarrow \text{hyperLogLog}()$
13. **foreach** $v \in V$ **do**
14. &nbsp;&nbsp;&nbsp;&nbsp;$c \leftarrow \text{columnCombination}(v)$
15. &nbsp;&nbsp;&nbsp;&nbsp;$C \leftarrow invertedIndex[v]$
16. &nbsp;&nbsp;&nbsp;&nbsp;**if** $C \neq \emptyset$ **then** $invertedIndex[v] \leftarrow C \cup \{c\}$
17. &nbsp;&nbsp;&nbsp;&nbsp;**else** insert $v$ into $hlls[c]$
18. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// $v$ is missing from the sample, so $c$ is not fully covered:
19. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$isCovered[c] \leftarrow \text{false}$
20. $I' \leftarrow \text{testAll}(I_c \text{ on } invertedIndex)$
21. **foreach** $(X \subseteq Y) \in I'$ **do**
22. &nbsp;&nbsp;&nbsp;&nbsp;**if** $isCovered[X] \vee (\neg isCovered[Y] \wedge \text{test}(X \subseteq Y \text{ on } hlls[X] \text{ and } hlls[Y]))$ **then**
23. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$I \leftarrow I \cup \{X \subseteq Y\}$

Now, if the dependent column combination $X$ of any retained IND candidate $X \subseteq Y$ is covered by the inverted index, we can directly promote the candidate to a valid IND (Lines 21–23); otherwise, if neither $X$ nor $Y$ is covered, we additionally perform the HyperLogLog-based IND test to verify the candidate. Note that, if $Y$ is covered but $X$ is not, then there must be some value in $X$ that violates $X \subseteq Y$. Thus, we do not add the candidate to the set of actual INDs $I$ in that case.
In summary, the presented IND test follows a hybrid strategy using HyperLogLog and an inverted index. The sampling-based inverted index reliably discerns INDs between column combinations with only few distinct values. If, in contrast, IND candidates between column combinations with many distinct values need to be tested, the test automatically switches to the stochastic HyperLogLog-based variant, which scales well because of its constant memory footprint regardless of the size of the input data. Still, the inverted index reinforces that test as a “control sample”.

## 5 Scrap Inclusion Dependencies

Exact IND discovery algorithms, and also FAIDA, deliberately exclude some uninteresting INDs from their result sets, even though the respective dataset actually satisfies them. Those INDs have certain syntactical properties: First, there are trivial INDs $X \subseteq X$ with equal dependent and referenced column combinations, which always hold. Second, discovery algorithms respect the permutability of INDs, i.e., if $AB \subseteq CD$ is a valid IND, then $BA \subseteq DC$ must also hold. Thus, it is sufficient to check (and report) only a single IND candidate from each such permutation class. Omitting these two kinds of INDs reduces both the amount of resources needed during the discovery process and the size of the output, which usually needs to undergo further, often manual, processing. Given those benefits, we propose to extend the criteria for omissible INDs from syntactical properties to instance-based properties, i.e., properties of the data contained in the columns of an IND. Specifically, we argue that columns that contain only NULL values (which we call NULL columns) and columns that contain only a single distinct value (which we call constant columns) are only contained in INDs that are degenerate and not actually useful for typical IND use cases, such as those described in Sect. 1.
Thus, it is fair to omit those scrap INDs – and it is also worthwhile, because in our experiments we found scrap INDs to appear quite frequently. **NULL columns.** There are several ways to interpret NULL values, e.g., using possible-world semantics or simply treating them as another domain value [Kö16]. Another approach is to treat NULLs as “no value”, which conforms to the semantics of foreign keys in SQL. Under that interpretation, a column with only NULLs basically contains no values at all; its value set is the empty set. Because the empty set is a subset of every other set, for a NULL column $A$ and any other column $B$, $A \subseteq B$ is a valid IND. However, as it does not describe an actual inclusion of values, this IND is unlikely to be useful. Furthermore, any other IND $X \subseteq Y$ can be extended to $XA \subseteq YB$, where $A$ and $X$, as well as $B$ and $Y$, lie in the same respective tables. Again, this extension is not useful, because it does not refine $X \subseteq Y$, i.e., $XA$ does not discern tuples beyond $X$. NULL columns are a common phenomenon. They can occur when schemata provide overly detailed column sets or simply when the data for a column cannot be ascertained. Therefore, FAIDA detects NULL columns during the preprocessing (cf. Sect. 4.1), removes them from candidate generation, and reports them in the end. In this way, no INDs involving NULL columns will be discovered, and the user is informed why. **Constant columns.** We call a column constant if it stores the same non-NULL value for every tuple. During the analysis of several real-world datasets, we found that in all cases of such constant columns, the value in question (e.g., "1" or an empty string) either is a surrogate for a NULL value or a default value (in the sense of SQL’s DEFAULT keyword).
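Both kinds of scrap columns can be detected in a single pass during preprocessing. A minimal sketch, with `None` standing in for NULL and purely illustrative column names and data:

```python
def classify_scrap_columns(columns):
    """Partition column names into NULL columns (only NULLs) and constant
    columns (exactly one distinct non-NULL value and no NULLs)."""
    null_columns, constant_columns = [], []
    for name, values in columns.items():
        distinct = set(values)
        if distinct == {None}:
            null_columns.append(name)
        elif len(distinct) == 1:
            constant_columns.append(name)
    return null_columns, constant_columns

columns = {
    "legacy_id": [None, None, None],     # NULL column
    "source":    ["web", "web", "web"],  # constant column
    "word":      ["hut", "hat", "has"],  # regular column
}
print(classify_scrap_columns(columns))  # (['legacy_id'], ['source'])
```

Columns flagged this way can be excluded from candidate generation and reported separately, mirroring FAIDA's handling described above.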
Arguably, INDs containing constant columns are omissible: If the constant is a NULL surrogate, then the same rationale as for NULL columns applies; in any other case, constant columns still do not provide much value, because they do not discern the tuples of their table. Such INDs with constant columns can bloat the IND search and result space. In particular, two constant columns \(A\) and \(B\) with the same value can be added to any IND \(X \subseteq Y\) and form the valid IND \(XA \subseteq YB\) where \(A\) and \(X\) as well as \(B\) and \(Y\) are from the same table. Thus, FAIDA also detects constant columns in order to report and remove them. By excluding the two described kinds of scrap INDs, FAIDA often gains significant performance improvements and, at the same time, enhances the quality of the discovered INDs. Note that the removal of scrap INDs is an additional, optional improvement of FAIDA and not a necessity to run the algorithm. ### 6 Evaluation In our evaluation, we demonstrate both the efficiency and effectiveness of FAIDA. Regarding efficiency, we want to answer two main questions: (i) **How does FAIDA compare to an exact state-of-the-art IND discovery algorithm, namely Binder?** (ii) **How well does FAIDA scale to large datasets?** To investigate the effectiveness, we address the following questions: (iii) **How good is FAIDA’s result quality and to what extent is it influenced by its parameterization?** (iv) **What are the effects of omitting the scrap INDs?** We first briefly describe our experimental setup and then answer these questions in various experiments. #### 6.1 Experimental setup **Hardware.** All experiments were run on a machine with an Intel Core i5-4690 CPU with 1600 MHz, 8 GB of main memory, and a Seagate Barracuda ST3000DM001 3 TB hard disk. We used Ubuntu 14 and the Oracle JRE 1.8u45 with a maximum 6 GB heap size. **Datasets.** The datasets used for evaluation are all publicly available. 
Some details about those datasets are listed on the left-hand side of Tab. 3. Further information and links for all datasets, as well as an implementation of FAIDA, can be found at https://hpi.de/naumann/projects/repeatability/data-profiling/metanome-ind-algorithms.html.

**Parameterization.** FAIDA is configured via two parameters: the sampling-based inverted index requires the number of values to sample from each column, and the HyperLogLog structures require a desired accuracy of their count estimates, which effectively determines their number of buckets. While FAIDA is guaranteed to find all INDs, it potentially reports incorrect INDs due to its approximate nature. Thus, the configuration of the two parameters impacts FAIDA's output quality: larger samples and more HyperLogLog buckets reveal violations in IND candidates more accurately. In our experiments, we set the sampling parameter to a default of 500 and the HyperLogLog accuracy to a default of 0.1%, which roughly allocates 640 KiB of main memory for 1,000,000 buckets per HyperLogLog structure. Sect. 6.4 investigates FAIDA's sensitivity w.r.t. this parameterization and shows that our defaults are a rather conservative and robust choice that incurs no or only very few false positive INDs. Thus, our defaults are a reasonable choice for the following comparison with Binder.

6.2 Comparison of FAIDA and Binder

The premise of approximate IND discovery is that a small loss in result quality can be traded for large performance improvements. Even though FAIDA always discovered exactly the correct INDs in our experiments, it relinquishes correctness guarantees; in turn, it should be more efficient than exact IND discovery algorithms. To verify this, we compare FAIDA's runtimes on various datasets with those of the state-of-the-art algorithm for exact IND discovery, Binder [Pa15]. Note that FAIDA prunes scrap INDs, as introduced in Sect. 5.
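For intuition on the accuracy parameter: HyperLogLog's relative standard error is roughly $1.04/\sqrt{m}$ for $m$ buckets, so a target accuracy translates into a bucket count as sketched below (a back-of-the-envelope helper of our own, not part of FAIDA):

```python
def hll_buckets(target_rel_error):
    # HyperLogLog's standard error is roughly 1.04 / sqrt(m) for m buckets,
    # so a target relative error translates into m ≈ (1.04 / error)^2.
    return round((1.04 / target_rel_error) ** 2)

# A 0.1% target needs about a million buckets; at a few bits per register
# that is on the order of the 640 KiB quoted above.
buckets_for_default = hll_buckets(0.001)
```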
The scrap-IND pruning is not restricted to approximate IND discovery and can be applied to other IND discovery algorithms as well. To allow a fair comparison, we modified Binder to also prune scrap INDs; we provide a separate evaluation of the scrap-IND pruning in Sect. 6.5. In addition, we considered simple approximate IND discovery baselines. To determine the impact of using hashes rather than actual value combinations, we modified Binder to hash long value combinations and then operate on those hashes. This modification was consistently about 20% slower, because the additional hashing costs could not be redeemed. Also, we considered FAIDA without its inverted index, thereby detecting INDs solely with HyperLogLog. However, while the performance overhead of the inverted index is small, leaving it out often causes false positive INDs. Those yield unnecessary IND candidates, so that eventually performance declines (see Sect. 6.4). As these modifications proved inferior, we focus only on FAIDA and Binder in the following.

Tab. 3 shows the results and runtimes of both algorithms when discovering the INDs in various datasets. FAIDA outperforms Binder consistently by a factor of five to six. Both algorithms generated and tested exactly the same IND candidates, which means that FAIDA's data preprocessing and approximate IND tests are more efficient than Binder's exact, hash partition-based IND test. The reason for this improvement is two-fold. First, FAIDA tests INDs using compact hashes rather than the actual values from the datasets, which allows for more efficient comparisons and reduces memory requirements. For instance, the TESMA dataset contains a lot of long string values. Although this dataset contains only unary INDs, FAIDA's hashing approach drastically reduces the computation load and easily redeems its data preprocessing overhead. In addition, value combinations of n-ary IND candidates often become quite long.
Again, FAIDA represents those by a single hash value. The second reason for the performance improvement is that FAIDA summarizes large datasets with small HyperLogLog structures and does not need any out-of-core execution. Binder, in contrast, needs to spill data to disk when processing large datasets. The I/O effort can slow it down severely, in particular for long values and value combinations, respectively.

<table>
<thead>
<tr>
<th>Dataset</th>
<th>Size</th>
<th>Non-constant columns</th>
<th>Non-scrap n-ary INDs</th>
<th>Max. arity</th>
<th>Runtime</th>
</tr>
</thead>
<tbody>
<tr>
<td>CENSUS</td>
<td>117 MB</td>
<td>48</td>
<td>222</td>
<td>6</td>
<td>39 sec</td>
</tr>
<tr>
<td>WIKIRANK</td>
<td>730 MB</td>
<td>25</td>
<td>118</td>
<td>6</td>
<td>2 min 44 sec</td>
</tr>
<tr>
<td>TESMA</td>
<td>1.2 GB</td>
<td>114</td>
<td>2</td>
<td>1</td>
<td>1 min 36 sec</td>
</tr>
<tr>
<td>TPC-H 70</td>
<td>79.4 GB</td>
<td>60</td>
<td>111</td>
<td>3</td>
<td>9 h 32 min</td>
</tr>
</tbody>
</table>

Tab. 3: Comparative evaluation for n-ary IND detection.

6.3 Scalability

In the face of ever-growing datasets, scalability is an important property of IND discovery algorithms. In particular, we investigate two scalability dimensions, namely the number of rows and the number of columns in a dataset, and compare FAIDA with Binder along these dimensions.

Row Scalability. To analyze the row scalability of FAIDA, it makes sense to reduce the impact of other factors affecting its runtime, such as value distributions and the number of INDs among the test datasets. To keep those other impact factors steady across datasets of different sizes, we use the TPC-H dataset generator to create datasets with varying numbers of rows but with the same schema and the same foreign keys. Nevertheless, we observed a few additional, likely spurious, INDs as the randomly generated data volume increased.
Because their number is very small, they hardly affect runtime: TPC-H 1 has 104 INDs, while TPC-H 100 has 113. Fig. 3 displays the results of the row scalability experiment for FAIDA and Binder. While both algorithms exhibit linear scaling behavior, FAIDA is always around five times faster than Binder. In other words, the larger the dataset, the greater the absolute time savings of FAIDA compared to Binder.

Column Scalability. To test the runtime behavior with regard to the number of columns, we used a subset of the PDB dataset, namely 15 tables with at least 20 columns each. Then, we executed FAIDA and Binder 20 times on those tables, thereby only taking into account the first $k$ columns for each $1 \leq k \leq 20$. Incrementing the number of considered columns in each table, rather than just incrementing the number of considered tables, mitigates the runtime impact of varying numbers of tuples in the tables and yields a smooth increase of the processed data volume. Fig. 4 shows the runtime of FAIDA and Binder together with the number and distribution of discovered INDs. Apparently, both algorithms scale roughly linearly w.r.t. the number of INDs. Nevertheless, FAIDA scales a lot better in the presence of $n$-ary INDs, where its approximation schemes take particular effect: First, FAIDA resorts to its hashed column store to test IND candidates, while Binder has to re-read the complete input dataset multiple times to test IND candidates of different arities. Second, FAIDA works exclusively on compact hashes; Binder, in contrast, concatenates values and shuffles the larger value combinations to test $n$-ary IND candidates. Finally, FAIDA's HyperLogLog structures keep its memory footprint relatively small, while Binder at some point needs to spill the mentioned value combinations to disk in order to shuffle them. This spilling causes Binder's drastic runtime increase for more than 250 columns.
All of the above experiments demonstrate that FAIDA trades result correctness for considerable performance gains.

6.4 Result Correctness

FAIDA uses two approximate data structures to test IND candidates: a sampling-based inverted index and HyperLogLog structures, both of which can trade main memory requirements for accuracy. Hence, it is important to size them in a way that lets FAIDA yield accurate results without straining main memory too much. To explore this trade-off, we executed FAIDA with different HyperLogLog accuracies (see Sect. 4.2) and column sample sizes (see Sect. 4.3), thereby measuring the false-positive rate, i.e., the ratio of incorrectly reported INDs. Tab. 4 displays the maximum false-positive rate of FAIDA across the seven datasets COMA, CENSUS, BIOSQLSP, WIKIRANK, CATH, TESMA, and TPC-H\(^3\), and reveals two interesting insights. First, the sampling-based inverted index and the HyperLogLog structures clearly complement one another: the HyperLogLog structures alone did not always yield exact results, and the inverted index has to be quite large to achieve full correctness on its own; in combination, however, the two data structures achieve superior accuracy. Second, it becomes apparent that a reasonably sized inverted index and HyperLogLog structures can robustly provide exact IND results. As a matter of fact, the column sample size of 500 and the HyperLogLog accuracy of 0.1% that we used in our efficiency experiments turn out to be a rather conservative choice; FAIDA is quite robust with respect to its parameter settings. While these results, of course, do not imply that FAIDA will discover exactly the correct INDs on any given dataset, they do indicate a high confidence in its results.
<table>
<thead>
<tr>
<th>Sample size</th>
<th>HLL accuracy 10%</th>
<th>HLL accuracy 1%</th>
<th>HLL accuracy 0.1%</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>6.000</td>
<td>0.082</td>
<td>0.024</td>
</tr>
<tr>
<td>10</td>
<td>0.243</td>
<td>0.047</td>
<td>0.012</td>
</tr>
<tr>
<td>100</td>
<td>0.094</td>
<td><strong>0.000</strong></td>
<td><strong>0.000</strong></td>
</tr>
<tr>
<td>1,000</td>
<td>0.036</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<td>10,000</td>
<td><strong>0.000</strong></td>
<td><strong>0.000</strong></td>
<td><strong>0.000</strong></td>
</tr>
</tbody>
</table>

Tab. 4: Maximum false-positive rate of FAIDA over various datasets under different parameterizations.

Having shown that the combination of a sampled inverted index and HyperLogLog yields high precision, it is intriguing to investigate how FAIDA behaves when HyperLogLog is replaced with other data summarization techniques. For this purpose, we repeated the above experiment with bottom-k sketches, as proposed in [Zh10], and with Bloom filters. To make these techniques comparable to HyperLogLog, we configured the size of the Bloom filter and the number of hashes in the bottom-k sketch, respectively, such that they consume as much main memory as HyperLogLog under the various accuracy settings from Tab. 4. We found that bottom-k sketches are not a good choice: although still yielding good results, they performed at most as well as (but often worse than) HyperLogLog and Bloom filters under all parameterizations. This is because bottom-k sketches do not partition the hash space, which would allow a pairwise comparison of their hash values, as is the case for bits in a Bloom filter or buckets in a HyperLogLog structure. However, we also found that Bloom filters performed similarly well as HyperLogLog and are an eligible replacement.

---

\(^3\) See https://hpi.de/naumann/projects/repeatability/data-profiling/metanome-ind-algorithms.html for downloads and details of these datasets.
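To illustrate why partitioning the hash space enables a pairwise comparison, here is a toy version of a HyperLogLog-based containment check (our own simplification, not FAIDA's implementation): if $A \subseteq B$, then in every bucket the maximum rank observed for $A$ cannot exceed that of $B$.

```python
import hashlib

P = 4            # 2^P buckets; real deployments use far more
M = 1 << P

def hll_registers(values):
    """Build HyperLogLog registers: per bucket, the maximum 'rank'
    (position of the first 1-bit in the remaining hash bits)."""
    regs = [0] * M
    for v in values:
        h = int.from_bytes(hashlib.sha256(str(v).encode()).digest()[:8], "big")
        bucket = h >> (64 - P)                 # leading P bits pick the bucket
        rest = h & ((1 << (64 - P)) - 1)       # remaining bits determine the rank
        rank = (64 - P) - rest.bit_length() + 1
        regs[bucket] = max(regs[bucket], rank)
    return regs

def may_be_subset(regs_a, regs_b):
    """Necessary (not sufficient) condition for A ⊆ B: B's registers
    dominate A's in every bucket. A violated bucket refutes the IND."""
    return all(a <= b for a, b in zip(regs_a, regs_b))
```

Because every value of $A$ also contributes to $B$'s registers when $A \subseteq B$, a true inclusion is never rejected; false positives remain possible, which is why the sampled inverted index serves as a control sample.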
6.5 Omitting scrap INDs

Scrap INDs are those INDs that involve either NULL columns (columns containing no values other than NULL) or constant columns (columns containing only a single value), or both. In Sect. 5 we argue that such INDs are not meaningful for the typical IND-based applications and that ignoring them can save considerable computation. It remains to show that the class of scrap INDs is common and that its dedicated treatment is worthwhile. To this end, we analyzed the different types of INDs in various datasets. The results are displayed in Fig. 5. Approximately two thirds of all INDs in this experiment are scrap INDs. While the majority of scrap INDs involve NULL columns, we observe that datasets can also comprise many scrap INDs related to constant columns, such as ENSEMBL. Furthermore, we measured the runtime of FAIDA with and without pruning of scrap INDs and found the pruning to be beneficial: while for three out of the seven datasets performance was not affected, for the other four datasets the pruning indeed yielded a performance improvement. On the EMDE dataset, in particular, we observed a speed-up of factor 20. In consequence, it seems appropriate to detect and prune scrap INDs already during the IND discovery process.

Fig. 5: Break-down of IND types for various datasets.

7 Conclusion

We presented FAIDA, an approximate algorithm for the n-ary IND discovery problem. FAIDA uses a symbiotic combination of data preprocessing, hashes, a sampling-based inverted index, and HyperLogLog to test INDs in a highly efficient and scalable manner. In our experiments, we found our algorithm to be as much as six times faster than the exact state-of-the-art IND discovery algorithm Binder. Besides these performance aspects, FAIDA guarantees result completeness, i.e., it will find all INDs in a given dataset. Although incorrect INDs might be reported in principle, FAIDA did not yield any false positives in our experiments. This shows the effectiveness of our hybrid IND test.
A promising direction for future research is to adapt FAIDA for incremental IND discovery: with the low memory footprint of its data structures, FAIDA might be a particularly good fit to maintain a set of INDs on evolving, dynamic datasets. However, updating those data structures on value deletions or changes in particular is a challenging task.

Acknowledgements. This research was partially funded by the German Research Foundation (DFG grant no. FOR 1306).
A Fast Analysis for Thread-Local Garbage Collection with Dynamic Class Loading

Richard Jones, University of Kent, Canterbury, U.K. (R.E.Jones@kent.ac.uk)
Andy C. King*, Microsoft Corporation, Redmond, U.S.A. (andy.c.king@gmail.com)

Abstract. Long-running, heavily multi-threaded Java server applications make stringent demands of garbage collector (GC) performance. Synchronisation of all application threads before garbage collection is a significant bottleneck for JVMs that use native threads. We present a new static analysis and a novel GC framework designed to address this issue by allowing independent collection of thread-local heaps. In contrast to previous work, our solution safely classifies objects even in the presence of dynamic class loading, requires neither write-barriers that may do unbounded work, nor synchronisation, nor locks during thread-local collections; our analysis is sufficiently fast to permit its integration into a high-performance, production-quality virtual machine.

1. Motivation

Server applications running on multiprocessors are typically long-running, heavily multi-threaded, require very large heaps, and load classes dynamically. Stringent demands are placed on the garbage collector [19] for good throughput and low pause times. Although pause times can be reduced through parallel (GC work divided among many threads) or concurrent (GC threads running alongside mutator threads) techniques, most GC techniques require a ‘stop the world’ phase during which the state of mutator threads is captured by scanning their stacks for references to heap objects. Unless the stack is scanned conservatively [7], the virtual machine must provide stack maps that indicate which stack frame slots hold heap references. Stack maps are typically updated only at certain GC points (allocation sites, method calls, backward branches and so forth) in order to reduce storage overheads; it is only safe to collect at these points.
In a multi-threaded environment, all threads must be at GC safe points before a GC can start. Virtual machines that manage their own threads [3], use custom architectures [10], or make every instruction a GC point [31] ensure that thread switching only occurs at GC safepoints; here, it is only necessary to synchronise a few processors rather than many mutator threads. However, for efficiency, most commercial Java virtual machines map Java threads to native threads, which can be switched at any instruction. Here, each thread must be rolled forward to a safe point (by either polling or code patching [1]). The cost of this synchronisation for heavily multi-threaded programs is considerable and proportional to the number of mutator threads running (rather than the number of processors). For example, thread suspension in the VolanoMark client [32] accounts for up to 23% of total GC time. Table 1 shows the average and total time to suspend threads for GC (columns 2, 3), the average and total GC time (4, 5), the total elapsed time (6), and suspension as a fraction of GC and elapsed time (7, 8). However, many objects are accessed by only a single thread [8, 9, 33, 25, 2, 6]. Table 2 shows the number and volume of shared objects and all objects (2–5), and hence the fraction that are never accessed outside their allocating thread (6, 7).

<table>
<thead>
<tr>
<th rowspan="2">Threads</th>
<th colspan="2">Suspend time</th>
<th colspan="2">GC time</th>
<th rowspan="2">Total runtime</th>
<th colspan="2">Suspend as % of</th>
</tr>
<tr>
<th>avg</th>
<th>total</th>
<th>avg</th>
<th>total</th>
<th>GC</th>
<th>run</th>
</tr>
</thead>
<tbody>
<tr>
<td>1024</td>
<td>6</td>
<td>1351</td>
<td>30</td>
<td>7389</td>
<td>15384</td>
<td>18.28</td>
<td>8.78</td>
</tr>
<tr>
<td>2048</td>
<td>13</td>
<td>4198</td>
<td>57</td>
<td>17992</td>
<td>35596</td>
<td>23.33</td>
<td>11.79</td>
</tr>
<tr>
<td>4096</td>
<td>30</td>
<td>12200</td>
<td>136</td>
<td>56124</td>
<td>81746</td>
<td>21.74</td>
<td>14.92</td>
</tr>
</tbody>
</table>

Table 1: Thread-suspension and GC time vs.
total runtime for the VolanoMark client (times in milliseconds). The insight behind our work is that, if objects that do not escape their allocating thread are kept in a thread-specific region of the heap, that region can be collected independently of the activity of other mutator threads: no global rendezvous is required. Further, independent collection of threads may also allow better scheduling. Given appropriate allocation of heap resources between threads, it is no longer necessary to suspend all mutator threads because a single thread has run out of memory. The contributions of this work are a new compile-time escape analysis and GC framework for Java. The output of the analysis drives a bytecode to bytecode transformation in which methods are specialised to allocate objects into thread-specific heaplets or the shared heap as appropriate; these methods are then JIT-compiled on demand in the usual way. - The analysis can classify objects even if parts of the program are unavailable (in contrast to [25, 30]). - The system is safe in the presence of dynamic class loading; for our benchmarks, it is effective. - It requires neither synchronisation nor locks for local collections (in contrast to [30]). - It does not require a write-barrier that may do unbounded work (in contrast to [14]). - It uses less time and space than other analyses that accommodate dynamic class loading [18]. It is sufficiently fast to make incorporation into a production JVM (Sun’s ExactVM for Solaris) realistic. Most analyses that act on partial programs generate worst-case solutions for unavailable fragments. In contrast, our system generates best-case, yet still safe, solutions. Only if and when a class is loaded that invalidates a solution does our system retreat to the synchronisation status quo, and then only for threads that might use this class. In practice, such badly-behaved classes are rare: hence we claim it is effective.
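As a small, hypothetical Java illustration of the distinction the analysis draws (class and field names are ours, not from the paper): an object reachable only from its allocating thread's stack is a candidate for a thread-local heaplet, while one published through a static field must live in the shared heap.

```java
// Hypothetical illustration (names are ours). The StringBuilder below
// is reachable only from its allocating thread's stack, so an escape
// analysis may place it in a thread-local heaplet; the object stored
// into the static field is reachable by any thread and so must be
// allocated in (or promoted to) the shared heap.
public class EscapeExample {
    static Object shared;  // a global root: anything stored here escapes

    static int localOnly(int n) {
        StringBuilder sb = new StringBuilder();  // candidate for a local heaplet
        sb.append(n);
        return sb.length();                      // sb dies without escaping
    }

    static void publish() {
        Object o = new Object();                 // escapes via a static field
        shared = o;
    }

    public static void main(String[] args) {
        publish();
        System.out.println(localOnly(42));
    }
}
```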
Our goal is a compile-time heap partitioning that allows a region (not necessarily contiguous) of the heap associated with a user-level thread to be collected without suspending, or otherwise synchronising with, other user-level threads. We require (a) a heap structure that permits independent collection of regions, (b) a bytecode escape analysis that classifies object allocation sites according to whether those objects are shared between threads, and (c) a bytecode transformation to specialise and rewrite methods appropriately. We discuss each below. ### 2. Related Work A GC can only determine a thread’s roots when it is in a consistent state. Systems that use their own non-preemptive threads [3] switch thread contexts only at GC points, so no synchronisation between threads running on a single processor is needed for GC. Custom architectures that allow native threads to switch only at certain machine instructions (which are GC points) [10] similarly require no intra-processor synchronisation. In both cases, synchronisation is needed only between processors. In contrast, for an on-the-fly reference-counting collector, Paz et al. show how threads’ state may be gathered one at a time [24]. However, most JVMs use native threads, which must all be stopped at GC points. Agesen [1] compares polling and code patching techniques for rolling threads forward to such GC points. Stichnoth et al. [31] suggest that stack maps can be compressed sufficiently to allow any instruction to be a GC point, but this does not address the other advantages of being able to collect thread-local heaps independently. Several authors have proposed thread-local heap organisations. Doligez et al. [13, 12] describe a heap architecture that takes advantage of ML’s distinction of mutable from immutable objects.
The latter are placed in local, young generation heaps while the former and those referenced by global variables are placed in the shared, old generation heap: there are no references between local heaps. Local, young generation collections are performed independently. ML does not support dynamic code loading. Steensgaard [30] divides the heap into a shared old generation and separate thread-specific young generations. His escape analysis segregates object allocation sites according to whether the objects that they allocate may become reachable both from some global variable and by more than one thread. He does not support dynamic class loading. Unfortunately, because all static fields are considered as roots for a local region, collection of thread-specific heaps requires a global rendezvous, only after which may each thread complete independent collection of its own region. In contrast, our system requires neither locks nor global rendezvous for thread-local collection. A run-time alternative is to use a write barrier to trap pointers to objects in local regions as they are written into objects in the shared heap, and to mark as global, or copy to a shared region, the target and its transitive closure [14]. When a thread triggers an independent collection, the mark-phase traverses and the sweeper reclaims only the thread’s local objects. The primary drawback to this approach is the unbounded work performed by the write-barrier to traverse structures (although this need only be performed once for any object, since global objects cannot revert back to local). Hirzel et al. [18] describe an Andersen [4] pointer analysis that supports all Java features including dynamic class loading. The memory and runtime costs of their analysis are significantly larger than ours, although comparisons between our JVMs are hard to draw.
<table> <thead> <tr> <th>Threads</th> <th>Global objects</th> <th>Global MB</th> <th>Total objects</th> <th>Total MB</th> <th>Local objects %</th> <th>Local MB %</th> </tr> </thead> <tbody> <tr> <td>1024</td> <td>761669</td> <td>36</td> <td>1460156</td> <td>80</td> <td>48</td> <td>55</td> </tr> <tr> <td>2048</td> <td>1627826</td> <td>77</td> <td>3062130</td> <td>164</td> <td>47</td> <td>54</td> </tr> <tr> <td>4096</td> <td>3669666</td> <td>168</td> <td>6623630</td> <td>345</td> <td>45</td> <td>52</td> </tr> </tbody> </table> Table 2: Fraction of objects that remain local throughout their entire life in the VolanoMark client. 3. Heap structure We partition the heap into a single shared heaplet and many thread-local heaplets. Other heap organisations may be laid over the heaplets layer (e.g. a heaplet may hold several generations, or the older generation may be held in the shared heaplet): we do not discuss this here. Our requirement for independent local collection of heaplets means that threads should scan only their local roots: global variables are prohibited from referencing objects in a thread-local region. Note this definition is more conservative than that of [30] since all objects reachable from static fields now escape. However, it concurs with those of [9, 33], both of which obtain good results for typical Java programs. If dynamic class loading is forbidden, objects can be proven either local, along all execution paths, from their creation until their death, or potentially shared by more than one thread. As all methods are available at analysis time, complete type information is available; hence the set of all possible types of a receiver object and the set of its invocable methods may be calculated. However, Java permits new classes to be loaded at run-time, so it is impossible to determine precisely either the type of the receiver or the set of method targets for a given invocation.
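A hypothetical sketch of this problem (all names are ours): at analysis time only one implementation of an interface is known, and it does not retain its argument; a class loaded later by name may add dispatch targets that do.

```java
// Hypothetical sketch: at analysis time only LocalTask is known, and it
// does not retain its argument. A class loaded later via Class.forName
// could also implement Task and store the argument in a static field,
// so the receiver type of t.run(o), and hence o's escapement, cannot be
// determined precisely ahead of time.
interface Task { void run(Object o); }

class LocalTask implements Task {
    public void run(Object o) { o.hashCode(); }  // argument does not escape
}

public class DispatchExample {
    static int calls = 0;

    static void invoke(Task t, Object o) {
        calls++;
        t.run(o);  // virtual dispatch: classes loaded later may add targets
    }

    public static void main(String[] args) throws Exception {
        // A future implementation could arrive like this, invalidating an
        // optimistic classification of o if its run() publishes o:
        //   Task t = (Task) Class.forName(args[0]).getDeclaredConstructor().newInstance();
        invoke(new LocalTask(), new Object());
    }
}
```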
Consequently, objects passed as parameters to methods of ambiguous receivers cannot be proved to be strictly local for all (future) paths of execution, yet the conservative solution [9, 33] of treating as global all actual parameters of yet-to-be-loaded methods is undesirable. Instead, our partial-world analysis takes a snapshot of the system at some point in the program’s execution. This captures all classes so far loaded and resolved by the virtual machine. Objects are classified as strictly local (L), optimistically local (OL) or global (G). - **Strictly local** objects are provably local, for all execution paths, regardless of which classes may be loaded in the future. They are placed in per-thread local heaplets. - **Optimistically local** objects are determined to be local at the time of the snapshot but may escape if passed to a method of a class loaded in the future. They are allocated into per-thread optimistically local heaplets. - **Global** objects are (potentially) shared in the current snapshot. They are allocated in the shared heap. To ensure that a heaplet is dependent only on its owning thread for collection, and never on another thread or any roots in the shared heap, references are prohibited from OL to L heaplets, from one thread’s heaplets to those of another thread, and from shared objects to L or OL ones (Figure 1). Let $T$ be a thread instance, with $T_L$ and $T_{OL}$ its L and OL heaplets, $T_S$ its stack and $G$ the shared heap, $x$ and $y$ storage locations, where a location may be in either a heaplet or the shared heap, and let $\rightarrow$ be a reference between two locations; consider $T_S \subset T_L$. The following invariants must be preserved: **Inv. 1.** $\forall y \in T_L:\ \text{if } x \rightarrow y \text{ then } x \in T_L \text{ or } x = T.$ **Inv. 2.** $\forall y \in T_{OL}:\ \text{if } x \rightarrow y \text{ then } x \in T_{OL} \cup T_L \text{ or } x = T.$ **Inv. 3.** $\forall y \in G:\ \text{if } x \rightarrow y \text{ then } x \in G \cup T_{OL} \cup T_L.$ 3.1.
Dynamic class loading After the analysis, an OL object is treated as if it were local until a new class is loaded that potentially causes it to become shared. A thread’s local collection will collect both its OL and L heaplets but G objects will neither be traversed nor reclaimed. Hence, despite only partial knowledge of the program, a best-case solution to the independent collection of objects is provided. Classes loaded after the snapshot analysis has completed are analysed as they are loaded. The analysis must process the methods of the new class and determine which existing call-sites may call methods of the new class (virtual dispatch). If the analysis indicates that a previously OL parameter is passed to a new method that causes it to become shared, then the new class is termed non-conforming. As it is not practical to track changes in escapement at the level of individual objects, such changes are tracked at the heaplet level. Loading a non-conforming class causes the OL heaplet of any thread that might use the class to be treated as global. Note that L objects of such a ‘compromised’ thread can never become shared: L heaplets can always be collected independently. On the other hand, in the absence of repeating the complete analysis, this OL heaplet can henceforth be collected only alongside the shared heap. 3.2. Technical details How should objects allocated before the snapshot be handled? They would have been placed in the shared heap, regardless of their escapement. If actually L or OL, these objects may later be updated to refer to objects in an L or OL heaplet but this does not break Inv. 1 or 2. Although allocated physically in the shared heap, a logically local object cannot be reached by any thread other than its own (which is blocked) so it is safe for the local GC to update its fields or to move the object into the local heaplet to which it holds a reference. 
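The pointer-direction invariants (Inv. 1–3) can be sketched as a store check over regions; this is our own minimal model, not EVM code, and it handles a single thread's heaplets plus the shared heap.

```java
// Minimal model (ours, not EVM code) of the pointer-direction
// invariants Inv. 1-3. A store x -> y is legal only if the region of x
// may reference the region of y: L may be referenced only from L (or
// from the owning Thread object itself, the x = T case, omitted here);
// OL from OL or L; the shared heap G from anywhere.
public class InvariantCheck {
    enum Region { L, OL, G }

    static boolean storeAllowed(Region src, Region dst) {
        switch (dst) {
            case L:  return src == Region.L;                      // Inv. 1
            case OL: return src == Region.OL || src == Region.L;  // Inv. 2
            default: return true;                                 // Inv. 3
        }
    }

    public static void main(String[] args) {
        // A local object may point at the shared heap, but never vice versa.
        System.out.println(storeAllowed(Region.L, Region.G));   // true
        System.out.println(storeAllowed(Region.G, Region.OL));  // false
    }
}
```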
On the other hand, any logically local object in the shared heap which holds a reference into a heaplet must be treated as a root of that heaplet. Such references are trapped and recorded by a write barrier (as for generational collectors). Thread objects themselves need special care. It would be unsound to allocate a Thread within its own heaplet since the method creating the thread would then hold a cross-heaplet reference. Instead, we place the Thread physically in the shared heap and associate it with its heaplet. It is treated specially as a root for a local collection (x = T in Inv. 1 and 2) but is neither moved nor are any of its shared fields updated by thread-local GCs, thereby avoiding any races. 4. Escape Analysis Our analysis is a Steensgaard-style [29], flow-insensitive, context-sensitive, partial-program, compositional escape analysis. Steensgaard analyses merge both sides of assignments, giving equal solutions, in contrast to Andersen analyses [4]. The latter pass values from the right- to the left-hand side of assignments and so offer greater precision, but their time and space cost is significantly greater [17, 16]. The improvement from flow-sensitive analyses has been found to be small in practice, despite a two-fold increase in analysis time [17]. Flow-insensitive analyses perform well, despite reduced precision for local variables, because the solution for a method depends strongly on the calling context. An alias is a storage location (global or local variable, parameter...) that refers to a second location, typically an object on the heap. The goal of alias analysis is to determine an approximation of the aliases of a given location [17]; precise points-to analysis is undecidable [21]. The results of an alias analysis are typically points-to graphs or alias sets. Escape analysis is an application of alias analysis.
By determining the aliases (at all points in a program’s execution) of an object, and hence computing the methods and threads to which those aliases are visible, escape analysis determines those objects that cannot escape their allocating method or thread. Our analysis is a development of Ruf and Steensgaard [25, 30]. We group potentially aliased expressions into equivalence classes and construct polymorphic method summaries that can be reused at different call sites. The algorithm is thus context-sensitive and flow-insensitive: it does not require iteration to a fixed point. Although, in the worst-case, time and space complexity are exponential, these analyses are fast in practice. Unlike Ruf-Steensgaard, our algorithm is compositional: any class loaded after a partial analysis of a snapshot of the program is also analysed (both to check conformance, i.e. that no execution of any method of this class could infringe the pointer-direction invariants, and for specialisation opportunities) and incorporated into the system. Support for dynamic class loading is achieved by presuming fields and method parameters to be OL rather than L, unless proven otherwise. Our analysis deems only those objects that do not escape their allocating method to be L. 4.1. Terminology Over the execution of a program, a variable may hold references to many storage locations: its alias set AS models this set of locations. In addition, AS contains a fieldMap from the names of the fields of objects referenced by the variable to their alias sets. All elements of an array are represented by a single value called ELT. Alias sets also contain a sharing attribute (L ⊆ OL ⊆ G), indicating their escapement. Alias sets for two variables may be merged (Figure 2). 
\[
\begin{aligned}
\text{Merge}(a, b):\quad & a.\text{sharing} := \text{lub}(a.\text{sharing},\ b.\text{sharing}) \\
& a.\text{fieldMap} := a.\text{fieldMap} \cup b.\text{fieldMap} \\
& \forall (f, a_i) \in a.\text{fieldMap},\ \forall (g, b_i) \in b.\text{fieldMap}: \\
& \quad \text{if } f = g \text{ then } \text{Merge}(a_i, b_i) \\
& \text{Delete}(b);\quad b := a
\end{aligned}
\] Figure 2: Alias set merger. lub is the least upper bound of the sharing attributes. Method arguments are modelled by alias contexts, a tuple of the alias sets of the method receiver o, the parameters p, the return value r and an exception value e: \( (o, p_1 \ldots p_n, r, e) \). Site contexts hold the actual parameters at a call-site, while method contexts hold the formal parameters of a method. 4.2. The Snapshot phase The algorithm operates in four major phases: Snapshot, Post-snapshot, Stop-the-world and On-demand. Once the snapshot and post-snapshot phases are complete, bytecode for specialised versions of methods is generated. To avoid races between specialisation routines and the ordinary execution of the JVM, the concurrent snapshot phases are followed by a once-only stop-the-world phase in which specialisation and code patching is completed. The analysis runs in a background thread which sleeps for a user-specifiable period of time in order to delay analysis until a reasonable number of classes have been loaded. By delaying, the analysis is given access to more knowledge of the program, which reduces the chance of a class loaded in the future being non-conforming. Note that we expect most classes loaded to conform as it would be unusual for a sub-class to allow an object to escape its thread (for example, by referencing it from a static field) when its parent did not; a possible scenario might be that a logging method is performing unexpectedly. The snapshot phase is entered at some arbitrary point in execution in order to analyse all classes loaded at that point.
After this phase, classes are analysed on-demand as they are loaded: any classes loaded while processing the snapshot are treated as post-snapshot. Analysis in both phases is divided into a sequence of passes (Table 3). ### Table 3: Order of snapshot analysis passes <table> <thead> <tr> <th>Pass</th> <th>Description</th> <th>Traversal</th> </tr> </thead> <tbody> <tr> <td>Merge</td> <td>Merge alias sets</td> <td>Top-down</td> </tr> <tr> <td>Call graph construction</td> <td>Identify potential method targets</td> <td>Any</td> </tr> <tr> <td>Thread Analysis</td> <td>Find shared fields of threads</td> <td>Any</td> </tr> <tr> <td>Unification</td> <td>Unify site and method contexts</td> <td>Bottom-up</td> </tr> <tr> <td>Specialisation</td> <td>Specialise by calling context</td> <td>Top-down</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Statement</th> <th>Action</th> </tr> </thead> <tbody> <tr> <td>\(v_0 = v_1\)</td> <td>Merge(AS(\(v_0\)), AS(\(v_1\)))</td> </tr> <tr> <td>\(v_0 = v_1.f\)</td> <td>Merge(AS(\(v_0\)), AS(\(v_1\)).fieldMap(\(f\)))</td> </tr> <tr> <td>\(v_0 = v_1[n]\)</td> <td>Merge(AS(\(v_0\)), AS(\(v_1\)).fieldMap(ELT))</td> </tr> <tr> <td>\(v = \text{new } C\)</td> <td>Merge(AS(\(v\)), AS(new \(C\)))</td> </tr> <tr> <td>\(v = \text{new } C[n]\)</td> <td>Merge(AS(\(v\)), AS(new \(C[n]\)))</td> </tr> <tr> <td>return \(v\)</td> <td>Merge(AS(\(v\)), \(r\))</td> </tr> <tr> <td>throw \(v\)</td> <td>Merge(AS(\(v\)), \(e\))</td> </tr> <tr> <td>\(v = p(v_0,\ldots,v_{n-1})\)</td> <td>none</td> </tr> </tbody> </table> **Figure 3:** Rules for the merge pass.
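A minimal, simplified sketch (our own model, not the EVM implementation) of the alias-set merge of Figure 2: take the least upper bound of the sharing attributes and union the field maps, merging alias sets bound to matching field names transitively.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model (ours) of the alias-set merge of Figure 2: take the
// least upper bound of the sharing attributes over the order L < OL < G,
// union the field maps, and merge alias sets bound to matching field
// names transitively. The real analysis also redirects the second set
// to the first and reclaims it, and tracks already-merged pairs to
// avoid repeating work on cyclic structures.
public class AliasSets {
    enum Sharing { L, OL, G }  // declaration order is the lattice order

    static class AliasSet {
        Sharing sharing = Sharing.L;
        Map<String, AliasSet> fieldMap = new HashMap<>();
    }

    static void merge(AliasSet a, AliasSet b) {
        // lub of the sharing attributes
        if (b.sharing.ordinal() > a.sharing.ordinal()) a.sharing = b.sharing;
        for (Map.Entry<String, AliasSet> e : b.fieldMap.entrySet()) {
            AliasSet mine = a.fieldMap.get(e.getKey());
            if (mine != null) merge(mine, e.getValue());   // matching fields merge
            else a.fieldMap.put(e.getKey(), e.getValue()); // union of field maps
        }
    }

    public static void main(String[] args) {
        AliasSet a = new AliasSet();
        AliasSet b = new AliasSet();
        b.sharing = Sharing.OL;
        b.fieldMap.put("f", new AliasSet());
        merge(a, b);
        System.out.println(a.sharing + " " + a.fieldMap.containsKey("f"));  // OL true
    }
}
```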
The **Merge pass** constructs an equality-based, intra-procedural analysis of each method by merging the alias sets of all values in a statement, propagating escapement throughout the method (Figures 2 and 3). As alias sets are merged (and matching fields merged transitively), the least upper bound of the *sharing* attributes of the sets is computed. Following the merger, the data structure for the second set can be reclaimed. In order to avoid repeating work, a red-black tree is used to track pairs of alias sets passed to **Merge**. Note that, to preserve context-sensitivity, this pass does not merge the aliases of site and method contexts (thus methods may be processed in any order). **Call-graph construction** Following the merger of alias sets, a type analysis is performed on receiver objects to estimate the set of potential method targets. Methods are processed one at a time, which makes the analysis conservative. The alternative — propagation of types across method calls, and consequent changing of types in that graph — would require expensive iteration to a fixed point. The imprecision of type information for formal parameters (which might be used as receivers for method invocations whose actual parameters escape) requires that they be treated conservatively and marked as *ambiguous*. An *ambiguous statement* is one with a receiver of an ambiguous type, for which the analysis cannot determine exactly the possible set of method targets. To resolve invocation statements, the analysis examines the kind of the invocation. If it is static, then the only possible method target is that specified in the constant pool of the current class [22]. Its entry in the pool contains the name and signature of the method and also the name of the exact class in which it resides. If the invocation is special, there is also only one target (unless specific conditions are met that make the call virtual [22]). 
For virtual and interface invocations, however, the target depends on the runtime type of the receiver: potentially each class in the receiver’s alias set could contain a method target. If the receiver is not a formal parameter but of a known type, then the set of classes is given by its aliases (including the superclass, to accommodate dynamic dispatch — subclasses need not be considered). The analysis must simply search each class for methods with matching names and signatures. Ambiguous invocations, however, may call methods in existing or future subclasses. A *Rapid Type Analysis* similar to [5] is used to prune the set of potential method targets to only those of classes that have been instantiated. Targets of static and special invocations, however, are added unconditionally. Care is taken with calls to methods that are not yet loaded, or were loaded during the snapshot — the latter are listed in a *post-snapshot queue* — by treating them as if they could cause objects to escape. The analysis marks statements as ambiguous when given a method target in a class outside the snapshot; all non-global aliases in the invocation statement’s site context are marked as OL.
of the newly created thread outside the method, leading to the more expensive solution described previously. This potentially restricts the set of programs that can be optimised. The Thread Analysis pass traverses the call graph, starting from the main method, keeping track of the current thread (initially the implicit main thread, MT), which is set as each encountered method’s invoking thread. When a RunnableRun or ThreadRun statement is encountered, the alias of the thread instance stored in the state is used as the current thread and the call-graph is walked from the respective run method, adding the thread alias to each method’s set of invoking threads. (Note that we identify a thread with its Runnable object o and call it the runtime owner of object o.) An alias set a’s sharing is set to be G if the traversal reaches a with a current thread different to that of the runtime owner (for any field in a). The Unification pass is inter-procedural, traversing the call-graph in bottom-up topological order, propagating escapement. At each call-site, sharing attributes are pulled from the formal parameters of each method context to the actual parameters in the site context; details are given in Figures 4 and 5. Unify takes the alias sets of the actual and the formal parameter and stores the least upper bound of their sharing attributes in the former.
Unlike the merge pass, any fields of the formal parameter that are not fields of the actual parameter are cloned on the fly and added to the latter’s field-map, in order to propagate escapement (rather than join alias sets across method calls which would lose context-sensitivity). To make the analysis iterative (rather than using fixed-point methods), the contexts of recursive calls are merged rather than unified, as per [25]. The Specialisation pass is a top-down pass which introduces context sensitivity, specialising methods according to calling context. Sharing attributes cannot be simply pushed across calls into method contexts (for this would lose context-sensitivity) but the site and method context of each target must be compared (see Figure 6). If they match, the target is walked as-is. Otherwise, the site context has worse escapement than the method and so, unless an appropriate specialisation already exists, the target method is specialised and this specialisation is added to the method’s list of specialisations. Note that, in the snapshot phase, escapement at site contexts is guaranteed to be no better than that of the method contexts. Finally in the snapshot phase, the analysis may encounter unresolved targets for which it cannot compare contexts. These invocations are flagged as ambiguous and any non-G alias sets in the site context are marked as OL. If the class is later loaded, the analysis can examine its methods starting from their callers and determine whether method contexts differ from those in each site context. If the escapement is worse, OL objects have become shared and the analysis must fix the OL heaplets. If it is better, the analysis can specialise the method and patch the specialisation call into the caller. On completion of the snapshot phase, all classes in the snapshot have been processed, and the interpreter and JIT-compiler are in a position to create specialised methods that allocate into appropriate heaplets. 4.3.
Post-snapshot phase So far the analysis has known only of those classes in the snapshot queue. It has treated others, even if loaded and resolved while the snapshot analysis was running, conservatively. These classes are now processed one at a time, applying the complete analysis to each before considering the next. Call-graph traversal differs from that of the snapshot phase. The call-graph may be large, so the post-snapshot analysis walks methods of new classes only from their callers (which were recorded during the snapshot phase). Note that the list of classes to be processed must include superclasses and any interfaces implemented. If a new method may override one in the snapshot, callers of the overridden method are added to the new method’s set of potential callers. Using this set, the analysis can walk methods starting from all their potential callers and thus avoid a potentially costly walk of the entire call-graph. When walking from callers, we have no implicit MT starting thread and so must rely on all threads that could possibly invoke a method (recorded during the thread analysis phase). Thus, given a caller method, the analysis must walk the sub-graph once for each thread by which it can be invoked, passing the appropriate thread along the graph each time. The analysis must also add the new methods as targets of invocation statements of their callers. Note that previously omitted methods that override those in already analysed superclasses can now be added as virtual invocation targets: the call-graph is made more accurate with each class processed. Unification proceeds similarly to that of the snapshot phase but stops short of unifying the site contexts from whence the walk started (as this would change their escapement and hence that of their caller, and so on; their specialisations have already been created). Instead, we rely on the next pass to compare contexts and specialise or compromise threads as necessary.
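The compare-and-act step can be sketched as follows (all names, and the Outcome type, are ours; the real pass also records the escaping alias sets): comparing per-parameter sharing of a site context against a method context yields a match, Worse (specialise the target) or Better (a non-conforming class in the post-snapshot phase).

```java
// Sketch (all names ours) of the context comparison driving the
// specialisation pass: per-parameter sharing attributes of a call-site
// context are compared against those of the method context. Worse means
// a specialisation is required; Better, which can arise only after the
// snapshot, signals a non-conforming class; otherwise the contexts match.
public class ContextCompare {
    enum Sharing { L, OL, G }             // declaration order is the lattice order
    enum Outcome { MATCH, WORSE, BETTER }

    static Outcome compare(Sharing[] site, Sharing[] method) {
        boolean worse = false, better = false;
        for (int i = 0; i < site.length; i++) {
            if (site[i].ordinal() > method[i].ordinal()) worse = true;
            if (site[i].ordinal() < method[i].ordinal()) better = true;
        }
        if (worse) return Outcome.WORSE;    // site escapes more: specialise
        if (better) return Outcome.BETTER;  // formals escape more: non-conforming
        return Outcome.MATCH;
    }

    public static void main(String[] args) {
        System.out.println(compare(new Sharing[]{Sharing.G},
                                   new Sharing[]{Sharing.OL}));  // WORSE
    }
}
```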
Statement: \(v = p(v_0, \ldots, v_{n-1})\)

Action:
\[
\begin{aligned}
& sc := (AS(v_0), \ldots, AS(v_{n-1}), AS(v), e) \\
& \forall\, p_i \in \mathrm{TARGETS}(p, v_0): \\
& \quad mc := MC(p_i) \\
& \quad escaping := \{\,\} \\
& \quad \textbf{case } \mathrm{CompareAliasContextsPS}(sc, mc, escaping) \textbf{ of} \\
& \qquad \text{Worse: } \mathrm{CreateSpec}(p_i, sc) \\
& \qquad \text{Better: } \forall a_i \in escaping,\ \forall v_i \in \mathrm{VALUES}(a_i):\ \mathrm{FIX} := \mathrm{FIX} \cup \{\mathrm{ALLOCATOR}(v_i)\}
\end{aligned}
\]

Figure 7: Specialisation rules for method invocation (post-snapshot). *escaping* is the set of escaping alias sets; it is extended by *CompareAliasContextsPS*. *VALUES(a)* is the set of all values in alias set \(a\); *FIX* is the set of threads whose OL heaplets are compromised. Specialisation also starts from the call-sites in the caller methods. It compares site and method contexts: those that match need no further processing other than to continue the top-down traversal. Sites with worse escapement than that of their new targets cause specialisation of the new targets. However, the third outcome — that the escapement of actual parameters is better than that of formal parameters — is now possible since the previous pass did not unify contexts. In this case, the new class is non-conforming and some object has (potentially) become shared. The aliases in the site context are guaranteed to be OL (or G) because the statement was marked ambiguous in the snapshot phase. Thus, the thread that allocated the object is now compromised and its OL heaplet must be treated as shared. 4.4. The Stop-The-World phase. Once the post-snapshot analysis has completed processing all new classes, all threads (including recompilation, finaliser and garbage collector threads) are suspended in order to avoid races. Specialisations of the methods of all classes are completed and, for each, its method block — the structure within the virtual machine that represents a Java method — is cloned.
Some fields, such as the method signature, exception table and debug structures can be shared, while bytecode blocks of methods are copied in their entirety to allow modification of their invocation and allocation opcodes. The invocation opcodes are patched to invoke further specialisations, while the allocation opcodes are patched to allocate into the appropriate heaplet (L or OL). Note that, for methods which have already been compiled, we can also patch the JIT-generated code directly in order to avoid allocating L and OL objects in the shared heap, which burdens the inter-region remembered sets. Finally, the OL heaplets of compromised threads are marked as shared, so that they are precluded from thread-local collections. 4.5. On-demand analysis The virtual machine is now running specialised methods, and local heaplets have been created and are in use. Any classes loaded after the analysis has completed and methods have been patched are analysed as part of loading. Here, the analysis runs in the thread loading the class, after the class and any superclasses have been loaded but before they are added to the class table (so application threads are prevented from resolving and using the new class until the analysis is complete). The analysis of the class is performed as for those on the post-snapshot queue, but the comparison of alias sets now also generates a set of escaping alias sets. As in the Post-snapshot phase, non-conforming classes, i.e. classes that cause OL objects to become shared, are identified (see Figure 7). These are actual parameter objects in a method of an existing class that, when passed into a method of the new class, become reachable from outwith their creating thread or from a global variable. The allocating threads of such objects are compromised and so their OL heaplets are set to be collected alongside the shared heap, rather than independently with their L heaplet (which can never be compromised).
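A minimal sketch (our own modelling, not EVM code) of heaplet-level compromise: loading a non-conforming class marks the OL heaplets of the threads in FIX as shared, after which only their L heaplets remain independently collectable.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch (our own modelling) of heaplet-level escapement tracking.
// Loading a non-conforming class compromises the OL heaplet of every
// thread in the FIX set: it must thereafter be collected with the
// shared heap. A thread's L heaplet can never be compromised, so it
// always remains independently collectable.
public class HeapletState {
    static class MutatorThread {
        boolean olCompromised = false;

        // Can this thread collect locally, optionally including its OL heaplet?
        boolean canCollectLocally(boolean includeOL) {
            return !includeOL || !olCompromised;
        }
    }

    static void loadNonConformingClass(Set<MutatorThread> fix) {
        for (MutatorThread t : fix) t.olCompromised = true;
    }

    public static void main(String[] args) {
        MutatorThread t = new MutatorThread();
        Set<MutatorThread> fix = new HashSet<>();
        fix.add(t);
        loadNonConformingClass(fix);
        System.out.println(t.canCollectLocally(false) + " " + t.canCollectLocally(true));
    }
}
```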
Note that the requirement to preserve site and method contexts for this purpose means that many analysis data structures cannot be discarded, as it would be expensive to reconstruct them. This imposes a considerable memory overhead, as they consume part of the C heap for the lifetime of the application; the Java heap is unaffected. 5. Analysis Evaluation For the results given below, we generate all specialisations required. We discuss options for patching and linking the specialisations in Section 6. Here, we evaluate our analysis in terms of its time and space costs, the escapement of allocation, code 'bloat' due to additional, specialised methods, and the potential for compromised threads. We do not consider here the effects on thread synchronisation time, collection time, the overall performance of applications, nor the usage of the Java heap. All measurements were taken on a lightly loaded Sun Ultra 60, with two 450MHz UltraSparc-II processors sharing 512MB of memory, the Solaris 8 operating system, running Sun's EVM\(^2\). Results for two small single-threaded SPECjvm98 benchmarks [27] (_201_compress and _213_javac) are included simply for comparison. VolanoMark, a client-server architecture for online chat rooms, is representative of large, long-running applications. The benchmark was run in configurations with 32, 256 and 2048 threads. SPECjbb2000 [28] represents multi-threaded three-tier transaction systems. Two configurations were used, both of which operate on a single warehouse (roughly 25MB of live data) but vary the number of threads: _jbb-1 uses one thread and _jbb-4 uses four. Six runs were performed for each test, the first being used as a warm-up. The best result from the remaining five was then selected. 
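The scoring protocol (six runs, first discarded as warm-up, best of the remaining five) can be written as a tiny harness. Note that "best" means highest for throughput scores (mps, tps) but lowest for elapsed times; the helper below, and its parameter names, are ours.

```python
def best_score(run, runs=6, higher_is_better=True):
    """Run a benchmark `runs` times, discard the warm-up, keep the best."""
    results = [run() for _ in range(runs)]
    rest = results[1:]                       # drop the first (warm-up) run
    return max(rest) if higher_is_better else min(rest)
```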
<table> <thead> <tr> <th>Benchmark</th> <th>Threads</th> <th>EVM</th> <th>EVM+analysis</th> </tr> </thead> <tbody> <tr> <td>compress</td> <td>1</td> <td>39 s</td> <td>40 s</td> </tr> <tr> <td>javac</td> <td>1</td> <td>35 s</td> <td>35 s</td> </tr> <tr> <td>vol-100</td> <td>32</td> <td>7456 mps</td> <td>7121 mps</td> </tr> <tr> <td>vol-128</td> <td>256</td> <td>5894 mps</td> <td>5895 mps</td> </tr> <tr> <td>vol-1024</td> <td>2048</td> <td>2976 mps</td> <td>2992 mps</td> </tr> <tr> <td>_jbb-1</td> <td>1</td> <td>864 tps</td> <td>878 tps</td> </tr> <tr> <td>_jbb-4</td> <td>4</td> <td>1363 tps</td> <td>1371 tps</td> </tr> </tbody> </table> Table 4: Benchmark timings and scores. Table 4 shows the baseline performance of the benchmarks without (column 3) and with (column 4) the analysis running in a background thread. The analysis has negligible impact on performance. \(^2\)aka Java 2 SDK (1.2.1_05) Production Release for Solaris. <table> <thead> <tr> <th>Benchmark</th> <th>Start (s)</th> <th>Methods</th> <th>Resolved</th> <th>Local</th> <th>%</th> <th>OptLocal</th> <th>%</th> <th>Shared</th> <th>%</th> <th>Total (KB)</th> <th>Time (s)</th> </tr> </thead> <tbody> <tr> <td>compress</td> <td>15</td> <td>3009</td> <td>2204</td> <td>16</td> <td>3</td> <td>148</td> <td>30</td> <td>314</td> <td>67</td> <td>5432</td> <td>1.236</td> </tr> <tr> <td>javac</td> <td>13</td> <td>4260</td> <td>3216</td> <td>26</td> <td>2</td> <td>304</td> <td>32</td> <td>600</td> <td>66</td> <td>13438</td> <td>4.210</td> </tr> <tr> <td>vol-16</td> <td>10</td> <td>2951</td> <td>2129</td> <td>12</td> <td>3</td> <td>147</td> <td>43</td> <td>184</td> <td>54</td> <td>5096</td> <td>7.225</td> </tr> <tr> <td>vol-128</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>vol-1024</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>jbb-1</td> <td>30</td> <td>5365</td> <td>3776</td> <td>68</td> <td>6</td> <td>549</td> <td>48</td> <td>534</td> <td>46</td> <td>31316</td> <td>17.742</td> </tr> <tr> <td>jbb-4</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> 
</table> Table 5: Object escapement at allocation sites. Figures are given as a number of allocation sites and as a percentage of the total. <table> <thead> <tr> <th>Benchmark</th> <th>Num. specs</th> <th>Bytecode (KB)</th> <th>Bloat (KB)</th> <th>Compiled (KB)</th> <th>Bloat (KB)</th> </tr> </thead> <tbody> <tr> <td>compress</td> <td>708</td> <td>91</td> <td>29</td> <td>318</td> <td>311</td> </tr> <tr> <td>javac</td> <td>1601</td> <td>173</td> <td>61</td> <td>1356</td> <td>766</td> </tr> <tr> <td>vol-X</td> <td>506</td> <td>82</td> <td>17</td> <td>382</td> <td>240</td> </tr> <tr> <td>jbb-1-X</td> <td>1129</td> <td>190</td> <td>56</td> <td>1274</td> <td>729</td> </tr> </tbody> </table> Table 6: Specialisations and bloat incurred for bytecode and compiled code. The memory overhead of the analysis dominates the additional space occupied by specialised method bytecodes and compiled instructions. Figure 8 shows plots of when classes are loaded by vol-1024 and jbb-4; the x-axis shows time, measured as usual in words allocated since launch. Each X on the plot indicates a class, while the two vertical bars mark the beginning (10 million words into the application for vol-1024) and end (roughly 17 million words) of the snapshot analysis. vol-1024 (Figure 8(a)) loaded several classes during the snapshot analysis, forcing them into the post-snapshot queue. It then loaded two classes almost half-way into the benchmark: java/lang/ref/Finalizer$1 and java/lang/ref/Finalizer$2. jbb-4 (Figure 8(b)) loaded no classes during the snapshot. In both cases, several classes from the SPEC harness' reporting framework are loaded toward the end: most of these classes are members of the java.awt package. We suggest that this behaviour is a somewhat artificial contrivance of these benchmarking suites rather than typical of a server application, and that our strategy of delaying the analysis should generally be effective. Figure 8: Class loading over time (in words allocated). 
Each X marks a class loaded. The beginning and end of the snapshot analysis are marked by the vertical bars. 6. Further work Specialisation has consequences for a class's constant pool and virtual dispatch table (vtable). To allow efficient access, both are of a fixed size, determined at class load time, but our specialisations increase the size of the pool and add further entries to the vtable. Several solutions are possible. (a) Methods could be scanned at load time to determine the maximum number of specialisations possible; but this would cause exponential growth of the constant pool and vtable. (b) The constant pool and vtable could be expanded by a smaller, pre-determined factor, possibly dependent on the number and signatures of the class's methods. Once the vtable was full, further specialisations would need to use the best existing match. (c) A second, shadow, constant pool and a separate spec vtable, used only by our specialisations, could be provided: this shadow constant pool is guaranteed to be fully resolved. An unfortunate consequence of this approach would be the addition of further levels of indirection for lookup of specialised methods. On the other hand, there is evidence to suggest that virtual method invocations are responsible for a significant number of data TLB misses [26] because the tables are created lazily as classes are loaded, and so are scattered sparsely about the heap. As the new spec vtables would be created together for all analysed classes, they can be packed tightly together onto a small number of pages, thereby minimising the chance of TLB or cache misses and offsetting the performance penalty of the extra invocation instructions. We intend to explore these options. We also plan a number of improvements both to the analysis and to the collector. Methods in dynamically loaded classes are only assumed to conform if their method contexts are identical to those of already loaded methods. 
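Option (c) can be pictured as follows. This sketch is purely illustrative (dict-based tables standing in for fixed-layout vtables, and all names are ours), but it shows the extra level of indirection that specialised dispatch would pay before falling back to the generic method.

```python
class Klass:
    def __init__(self, vtable):
        self.vtable = vtable    # fixed size, laid out at class-load time
        self.spec_vtable = {}   # slot -> {context: specialised method}

def invoke(klass, slot):
    return klass.vtable[slot]()             # normal virtual dispatch

def invoke_spec(klass, slot, context):
    """Specialised dispatch: one more lookup than a plain vtable call."""
    entries = klass.spec_vtable.get(slot)
    if entries and context in entries:
        return entries[context]()           # run the specialisation
    return klass.vtable[slot]()             # fall back to the generic code
```

Packing all `spec_vtable` structures onto a few pages, as the text suggests, is what would offset the cost of this second lookup.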
Better conformance rules for dynamically loaded classes are almost certainly possible. Heap resources must be allocated carefully between threads in order to prevent one thread's greed causing all threads to exhaust their heaplets: we intend to investigate appropriate policies and GC triggers, and how best to lay generations over the heaplet structure. 7. Conclusions We have presented a novel static analysis and garbage collector design that allows the heap to be divided into thread-specific heaplets that can be collected independently, thereby removing the need to synchronise all mutator threads for GC. The analysis can classify objects in the presence of incomplete knowledge, and is sufficiently fast to make incorporation into a production JVM feasible. The system is safe, and generates best-case solutions, even in the presence of dynamic class loading; it requires neither synchronisation nor locks for local collections, nor a runtime write-barrier that may do unbounded work. Acknowledgements This work was supported by the EPSRC, grant GR/R42252. We are also grateful to Steve Heller and Dave Detlefs of the Java Technology Group at Sun Microsystems Laboratories East for providing ExactVM, and Andy M. King for his helpful advice. Any opinions, findings, conclusions, or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors.
CSC D70: Compiler Optimization Dataflow Analysis Prof. Gennady Pekhimenko University of Toronto Winter 2019 The content of this lecture is adapted from the lectures of Todd Mowry and Phillip Gibbons

Refreshing from Last Lecture • Basic Block Formation • Value Numbering

Partitioning into Basic Blocks • Identify the leader of each basic block – First instruction – Any target of a jump – Any instruction immediately following a jump • A basic block starts at a leader & ends at the instruction immediately before a leader (or the last instruction)

```
 1) i = 1
 2) j = 1
 3) t1 = 10 * i
 4) t2 = t1 + j
 5) t3 = 8 * t2
 6) t4 = t3 - 88
 7) a[t4] = 0.0
 8) j = j + 1
 9) if j <= 10 goto (3)
10) i = i + 1
11) if i <= 10 goto (2)
12) i = 1
13) t5 = i - 1
14) t6 = 88 * t5
15) a[t6] = 1.0
16) i = i + 1
17) if i <= 10 goto (13)
```
Leaders: (1), (2), (3), (10), (12) and (13).

Value Numbering (VN) • More explicit with respect to VALUES, and TIME • each value has its own "number" – common subexpression means same value number • var2value: current map of variable to value – used to determine the value number of the current expression: \[ r1 + r2 \Rightarrow \text{var2value}(r1)+\text{var2value}(r2) \]

Algorithm:
```
Data structure:
  VALUES = table of entries: expression [OP, valnum1, valnum2]
                             var   // variable currently holding expression
For each instruction (dst = src1 OP src2) in execution order
  valnum1 = var2value(src1); valnum2 = var2value(src2)
  IF [OP, valnum1, valnum2] is in VALUES
    v = index of the matching entry
    replace instruction with: CPY dst = VALUES[v].var
  ELSE
    add entry (expression = [OP, valnum1, valnum2], var = dst) to VALUES
    v = index of new entry; tv is a new temporary for v
    replace instruction with: tv = VALUES[valnum1].var OP VALUES[valnum2].var
                              CPY dst = tv
  set_var2value(dst, v)
```

VN Example. Assign: a→r1, b→r2, c→r3, d→r4
```
a = b+c;   ADD t1 = r2,r3
           CPY r1 = t1    // (a = t1)
b = a-d;   SUB t2 = r1,r4
           CPY r2 = t2    // (b = t2)
c = b+c;   ADD t3 = r2,r3
           CPY r3 = t3    // (c = t3)
d = a-d;   CPY r4 = t2    // a-d is already available in t2
```

Questions about Assignment #1 • Tutorial #1 • Tutorial #2 next week – More in-depth LLVM coverage

Outline 1. Structure of data flow analysis 2. Example 1: Reaching definition analysis 3. Example 2: Liveness analysis 4. Generalization

What is Data Flow Analysis? • **Local analysis (e.g., value numbering)** – analyze the effect of each instruction – compose the effects of instructions to derive information from the beginning of the basic block to each instruction • **Data flow analysis** – analyze the effect of each basic block – compose the effects of basic blocks to derive information at basic block boundaries – from basic block boundaries, apply the local technique to generate information on instructions

What is Data Flow Analysis? (2) • Data flow analysis: – Flow-sensitive: sensitive to the control flow in a function – intraprocedural analysis • Examples of optimizations: – Constant propagation – Common subexpression elimination – Dead code elimination

What is Data Flow Analysis? (3) For each variable x determine: Value of x? Which "definition" defines x? Is the definition still meaningful (live)?

Static Program vs. Dynamic Execution - **Statically**: Finite program - **Dynamically**: Can have infinitely many possible execution paths - **Data flow analysis abstraction**: For each point in the program: combines information of all the instances of the same program point. - **Example of a data flow question**: Which definition defines the value used in statement "b = a"?

Effects of a Basic Block • Effect of a statement: \( a = b + c \) • \textbf{Uses} variables (b, c) • \textbf{Kills} an old definition (old definition of a) • new \textbf{definition} (a) • Composing the effects of statements gives the effect of a basic block – A \textbf{locally exposed use} in a b.b. 
is a use of a data item which is not preceded in the b.b. by a definition of the data item – any definition of a data item in the basic block \textbf{kills} all definitions of the same data item reaching the basic block – a \textbf{locally available definition} = the last definition of a data item in the b.b.

Effects of a Basic Block (example)
```
t1 = r1 + r2
r2 = t1
t2 = r2 + r1
r1 = t2
t3 = r1 * r1
r2 = t3
if r2 > 100 goto L1
```
Locally exposed uses? \(r_1\) and \(r_2\). Kills any definitions? Yes: any other definition of \(t_2\) (likewise for the other variables defined here). Locally available definition? \(t_2\).

Reaching Definitions • Every assignment is a **definition** • A definition *d* **reaches** a point *p* if there exists a path from the point immediately following *d* to *p* such that *d* is **not killed** (overwritten) along that path • Problem statement – For each point in the program, determine if each definition in the program reaches the point – A bit vector per program point, vector length = #defs

Reaching Definitions (example)
```
L1: if input() GOTO L2
    d0: a = x
    d1: b = a
    d2: a = y
    GOTO L1
L2: ...
```
d2 reaches this point? 
yes

Data Flow Analysis Schema - Build a flow graph (nodes = basic blocks, edges = control flow) - Set up a set of equations between in[b] and out[b] for all basic blocks b - Effect of code in a basic block: transfer function $f_b$ relates in[b] and out[b], for the same b - Effect of flow of control: relates out[b_1], in[b_2] if b_1 and b_2 are adjacent - Find a solution to the equations

Effects of a Statement \[ \text{in}[B0] \] \[ \begin{align*} \text{d0: } & y = 3 & f_{d0} \\ \text{d1: } & x = 10 & f_{d1} \\ \text{d2: } & y = 11 & f_{d2} \end{align*} \] \[ \text{out}[B0] \] - \( f_s \): a transfer function of a statement - abstracts the execution with respect to the problem of interest - For a statement \( s \) (\( d: x = y + z \)) \[ \text{out}[s] = f_s(\text{in}[s]) = \text{Gen}[s] \cup (\text{in}[s]-\text{Kill}[s]) \] - \( \text{Gen}[s] \): definitions generated: \( \text{Gen}[s] = \{d\} \) - Propagated definitions: \( \text{in}[s] - \text{Kill}[s] \), where \( \text{Kill}[s] \) = the set of all other defs to \( x \) in the rest of the program

Effects of a Basic Block - Transfer function of a statement $s$: - $\text{out}[s] = f_s(\text{in}[s]) = \text{Gen}[s] \cup (\text{in}[s]-\text{Kill}[s])$ - Transfer function of a basic block $B$: - composition of the transfer functions of the statements in $B$ - $\text{out}[B] = f_B(\text{in}[B]) = f_{d_2} \cdot f_{d_1} \cdot f_{d_0}(\text{in}[B])$ - $= \text{Gen}[d_2] \cup ((\text{Gen}[d_1] \cup ((\text{Gen}[d_0] \cup (\text{in}[B]-\text{Kill}[d_0]))-\text{Kill}[d_1]))-\text{Kill}[d_2])$ - $= \text{Gen}[d_2] \cup ((\text{Gen}[d_1] \cup (\text{Gen}[d_0] - \text{Kill}[d_1])) - \text{Kill}[d_2]) \cup (\text{in}[B] - (\text{Kill}[d_0] \cup \text{Kill}[d_1] \cup \text{Kill}[d_2]))$ - $= \text{Gen}[B] \cup (\text{in}[B] - \text{Kill}[B])$ - $\text{Gen}[B]$: locally available definitions (the last definition of each data item, visible at the end of the b.b.) - $\text{Kill}[B]$: the set of definitions killed by $B$

Example - a **transfer function** $f_b$ of a basic block $b$: \[ \text{OUT}[b] 
= f_b(\text{IN}[b]) \] incoming reaching definitions $\rightarrow$ outgoing reaching definitions - A basic block $b$ - **generates** definitions: $\text{Gen}[b]$, the set of locally available definitions in $b$ - **kills** definitions: $\text{Kill}[b]$, the set of defs (in the rest of the program) killed by the defs in $b$ - **propagates** definitions: $\text{in}[b] - \text{Kill}[b]$ - $\text{out}[b] = \text{Gen}[b] \cup (\text{in}[b]-\text{Kill}[b])$

Effects of the Edges (acyclic) - \( \text{out}[b] = f_b(\text{in}[b]) \) - Join node: a node with multiple predecessors - **meet** operator: \[ \text{in}[b] = \text{out}[p_1] \cup \text{out}[p_2] \cup \ldots \cup \text{out}[p_n], \text{where } p_1, \ldots, p_n \text{ are all predecessors of } b \]

Cyclic Graphs • Equations still hold • \(\text{out}[b] = f_b(\text{in}[b])\) • \(\text{in}[b] = \text{out}[p_1] \cup \text{out}[p_2] \cup ... 
\cup \text{out}[p_n], p_1, ..., p_n \text{ pred.}\) • Find: fixed point solution

Reaching Definitions: Iterative Algorithm
```
input: control flow graph CFG = (N, E, Entry, Exit)
// Boundary condition
out[Entry] = ∅
// Initialization for iterative algorithm
For each basic block B other than Entry
    out[B] = ∅
// Iterate
While (changes to any out[] occur) {
    For each basic block B other than Entry {
        in[B]  = ∪ out[p], for all predecessors p of B
        out[B] = f_B(in[B])    // out[B] = Gen[B] ∪ (in[B] − Kill[B])
    }
}
```

Reaching Definitions: Worklist Algorithm
```
input: control flow graph CFG = (N, E, Entry, Exit)
// Initialize
out[Entry] = ∅    // can set out[Entry] to a special def per variable:
                  // if one reaches a use, that use may be undefined
For all other nodes i
    out[i] = ∅    // can optimize by initializing out[i] = gen[i]
ChangedNodes = N
// Iterate
While ChangedNodes ≠ ∅ {
    Remove i from ChangedNodes
    in[i] = ∪ out[p], for all predecessors p of i
    oldout = out[i]
    out[i] = f_i(in[i])    // out[i] = gen[i] ∪ (in[i] − kill[i])
    if (oldout ≠ out[i]) {
        for all successors s of i
            add s to ChangedNodes
    }
}
```

Example B1 - d1: i = m - 1 - d2: j = n - d3: a = u1 B2 - d4: i = i + 1 - d5: j = j - 1 B3 - d6: a = u2 B4 - d7: i = u3 <table> <thead> <tr> <th></th> <th>First Pass</th> <th>Second Pass</th> </tr> </thead> <tbody> <tr> <td>IN[B1]</td> <td>000 00 0 0</td> <td>000 00 0 0</td> </tr> <tr> <td>OUT[B1]</td> <td>111 00 0 0</td> <td>111 00 0 0</td> </tr> <tr> <td>IN[B2]</td> <td>111 00 0 0</td> <td>111 01 1 1</td> </tr> <tr> <td>OUT[B2]</td> <td>001 11 0 0</td> <td>001 11 1 0</td> </tr> <tr> <td>IN[B3]</td> <td>001 11 0 0</td> <td>001 11 1 0</td> </tr> <tr> <td>OUT[B3]</td> <td>000 11 1 0</td> <td>000 11 1 0</td> </tr> <tr> <td>IN[B4]</td> 
<td>001 11 1 0</td> <td>001 11 1 0</td> </tr> <tr> <td>OUT[B4]</td> <td>001 01 1 1</td> <td>001 01 1 1</td> </tr> <tr> <td>IN[exit]</td> <td>001 01 1 1</td> <td>001 01 1 1</td> </tr> </tbody> </table>

Live Variable Analysis • Definition – A variable $v$ is **live** at point $p$ if the value of $v$ is used along some path in the flow graph starting at $p$; otherwise, the variable is dead. • Motivation, e.g. register allocation:
```
for i = 0 to n
    ... i ...
...
for i = 0 to n
    ... i ...
```
• Problem statement – For each basic block, determine if each variable is live in the basic block – Size of bit vector: one bit for each variable

Transfer Function • **Insight:** trace uses backwards to the definitions, against the direction of control flow: IN[b] = f_b(OUT[b]). Example block:
```
d3: a = 1
d4: b = 1
d5: c = a
d6: a = 4
```
• A basic block b can • **generate** live variables: Use[b] – the set of locally exposed uses in b • **propagate** incoming live variables: OUT[b] − Def[b] – where Def[b] = the set of variables defined in b • **transfer function** for block b: in[b] = Use[b] ∪ (out[b] − Def[b]) • in[b] = f_b (out[b]) • Join node: a node with multiple successors • meet operator: out[b] = in[s_1] U in[s_2] U ... 
U in[s_n], where s_1, ..., s_n are all successors of b

Liveness: Iterative Algorithm
```
input: control flow graph CFG = (N, E, Entry, Exit)
// Boundary condition
in[Exit] = ∅
// Initialization for iterative algorithm
For each basic block B other than Exit
    in[B] = ∅
// Iterate
While (changes to any in[] occur) {
    For each basic block B other than Exit {
        out[B] = ∪ in[s], for all successors s of B
        in[B]  = f_B(out[B])    // in[B] = Use[B] ∪ (out[B] − Def[B])
    }
}
```

Example - **B1** - \( d_1: i = m - 1 \) - \( d_2: j = n \) - \( d_3: a = u_1 \) - **B2** - \( d_4: i = i + 1 \) - \( d_5: j = j - 1 \) - **B3** - \( d_6: a = u_2 \) - **B4** - \( d_7: i = u_3 \) **First Pass** - OUT[entry] \( \{m,n,u_1,u_2,u_3\} \) - IN[B1] \( \{m,n,u_1,u_2,u_3\} \) - OUT[B1] \( \{i,j,u_2,u_3\} \) - IN[B2] \( \{i,j,u_2,u_3\} \) - OUT[B2] \( \{u_2,u_3\} \) - IN[B3] \( \{u_2,u_3\} \) - OUT[B3] \( \{u_3\} \) - IN[B4] \( \{u_3\} \) - OUT[B4] \( \{\} \) **Second Pass** - OUT[entry] \( \{m,n,u_1,u_2,u_3\} \) - IN[B1] \( \{m,n,u_1,u_2,u_3\} \) - OUT[B1] \( \{i,j,u_2,u_3\} \) - IN[B2] \( \{i,j,u_2,u_3\} \) - OUT[B2] \( \{u_2,u_3\} \) - IN[B3] \( \{u_2,u_3\} \) - OUT[B3] \( \{u_3\} \) - IN[B4] \( \{u_3\} \) - OUT[B4] \( \{i,j,u_2,u_3\} \) ## Framework <table> <thead> <tr> <th></th> <th>Reaching Definitions</th> <th>Live Variables</th> </tr> </thead> <tbody> <tr> <td>Domain</td> <td>sets of definitions</td> <td>sets of variables</td> </tr> <tr> <td>Direction</td> <td>forward: out[b] = f_b(in[b]), in[b] = ∧ out[pred(b)]</td> <td>backward: in[b] = f_b(out[b]), out[b] = ∧ in[succ(b)]</td> </tr> <tr> <td>Transfer function</td> <td>f_b(x) = Gen_b ∪ (x − Kill_b)</td> <td>f_b(x) = Use_b ∪ (x − Def_b)</td> </tr> <tr> <td>Meet operation (∧)</td> <td>∪</td> <td>∪</td> </tr> <tr> <td>Boundary condition</td> <td>out[entry] = ∅</td> <td>in[exit] = ∅</td> </tr> <tr> <td>Initial interior points</td> <td>out[b] = ∅</td> 
<td>in[b] = ∅</td> </tr> </tbody> </table> Other examples (e.g., available expressions) are defined in ALSU 9.2.6.

Thought Problem 1. "Must-Reach" Definitions • A definition $D$ ($a = b+c$) **must reach** point $P$ iff – $D$ appears at least once along all paths leading to $P$ – $a$ is not redefined along any path after the last appearance of $D$ and before $P$ • How do we formulate the data flow algorithm for this problem?

Thought Problem 2: A legal solution to (May) Reaching Def? - Will the worklist algorithm generate this answer?

Questions • **Correctness** - the equations are satisfied, if the program terminates. • **Precision: how good is the answer?** - is the answer ONLY a union of all possible executions? • **Convergence: will the analysis terminate?** - or, will there always be some nodes that change? • **Speed: how fast is the convergence?** - how many times will we visit each node?

Foundations of Data Flow Analysis 1. Meet operator 2. Transfer functions 3. Correctness, Precision, Convergence 4. Efficiency • Reference: ALSU pp. 613-631 • Background: Hecht and Ullman, Kildall, Allen and Cocke [76]

A Unified Framework • Data flow problems are defined by • Domain of values: \( V \) • Meet operator \((V \land V \to V)\), initial value • A set of transfer functions \((V \to V)\) • Usefulness of the unified framework • To answer questions such as correctness, precision, convergence, speed of convergence for a family of problems – If meet operators and transfer functions have properties \( X \), then we know \( Y \) about the above. 
• Reuse code

Meet Operator • Properties of the meet operator • commutative: \( x \land y = y \land x \) • idempotent: \( x \land x = x \) • associative: \( x \land (y \land z) = (x \land y) \land z \) • there is a Top element \( T \) such that \( x \land T = x \) • The meet operator defines a partial ordering on values • \( x \leq y \) if and only if \( x \land y = x \) (\( y \rightarrow x \) in the diagram) – Transitivity: if \( x \leq y \) and \( y \leq z \) then \( x \leq z \) – Antisymmetry: if \( x \leq y \) and \( y \leq x \) then \( x = y \) – Reflexivity: \( x \leq x \)

Partial Order - Example: let $\mathbf{V} = \{x \mid x \subseteq \{d_1, d_2\}\}$, $\land = \cap$ - Top and Bottom elements - Top $T$ such that: $x \land T = x$ - Bottom $\perp$ such that: $x \land \perp = \perp$ - The values and meet operator in a data flow problem define a semi-lattice: - there exists a $T$, but not necessarily a $\perp$. - $x, y$ are ordered: if $x \leq y$ then $x \land y = x$ ($y \rightarrow x$ in the diagram) - what if $x$ and $y$ are not ordered? - $x \land y \leq x$, $x \land y \leq y$, and if $w \leq x, w \leq y$, then $w \leq x \land y$

One vs. All Variables/Definitions • Lattice for each variable, e.g. intersection: the two-element lattice \( 1 > 0 \) • Lattice for three variables: the product of the per-variable lattices (diagram omitted)

Descending Chain • **Definition** - The **height** of a lattice is the largest number of > relations that will fit in a descending chain: \[ x_0 > x_1 > x_2 > \ldots \] • Height of values in reaching definitions? - Height n, the number of definitions • **Important property:** **finite descending chain** • **Can an infinite lattice have a finite descending chain?** - yes • **Example: Constant Propagation/Folding** - To determine if a variable is a constant • **Data values** - undef, ... -1, 0, 1, 2, ..., not-a-constant

Transfer Functions • Basic Properties \( f: V \rightarrow V \) – Has an identity function • There exists an \( f \) such that \( f(x) = x \), for all \( x \). 
– Closed under composition • if \( f_1, f_2 \in F \), then \( f_1 \cdot f_2 \in F \)

Monotonicity • A framework \((F, V, \land)\) is monotone if and only if • \(x \leq y\) implies \(f(x) \leq f(y)\) • i.e. a "smaller or equal" input to the same function will always give a "smaller or equal" output • Equivalently, a framework \((F, V, \land)\) is monotone if and only if • \(f(x \land y) \leq f(x) \land f(y)\) • i.e. merging the inputs and then applying \(f\) gives a result smaller than or equal to applying the transfer function to each input individually and then merging the results

Example • Reaching definitions: \( f(x) = \text{Gen} \cup (x - \text{Kill}), \land = \cup \) – Definition 1: \( x_1 \leq x_2 \) implies \( \text{Gen} \cup (x_1 - \text{Kill}) \leq \text{Gen} \cup (x_2 - \text{Kill}) \) – Definition 2: \( (\text{Gen} \cup (x_1 - \text{Kill})) \cup (\text{Gen} \cup (x_2 - \text{Kill})) = \text{Gen} \cup ((x_1 \cup x_2) - \text{Kill}) \) • Note: a monotone framework does not mean that \( f(x) \leq x \) • e.g., reaching definitions for two definitions in a program • suppose: \( f_x: \text{Gen}_x = \{d_1, d_2\}; \text{Kill}_x = \{\} \) • If input(second iteration) \( \leq \) input(first iteration), then result(second iteration) \( \leq \) result(first iteration)

Distributivity • A framework $(F, V, \wedge)$ is **distributive** if and only if • $f(x \wedge y) = f(x) \wedge f(y)$ • i.e. merging the inputs and then applying $f$ is **equal to** applying the transfer function individually and then merging the results • Example: Constant Propagation is NOT distributive \[ \begin{align*} a &= 2 \\ b &= 3 \\ \end{align*} \] \[ \begin{align*} a &= 3 \\ b &= 2 \\ \end{align*} \] \[ c = a + b \]

Data Flow Analysis • Definition – Let $f_1, ..., f_m \in F$, where $f_i$ is the transfer function for node $i$ • $f_p = f_{n_k} \cdot ... 
\cdot f_{n_1}$, where $p$ is a path through nodes $n_1, ..., n_k$ • $f_p = \text{identify function}$, if $p$ is an empty path • Ideal data flow answer: – For each node $n$: $\bigwedge f_{p_i}(T)$, for all possibly executed paths $p_i$ reaching $n$. • But determining all possibly executed paths is undecidable Meet-Over-Paths (MOP) - Error in the conservative direction - **Meet-Over-Paths** (MOP): - For each node $n$: \[ \text{MOP}(n) = \bigwedge f_{p_i}(T), \text{ for all paths } p_i \text{ reaching } n \] - a path exists as long there is an edge in the code - consider more paths than necessary - MOP = Perfect-Solution $\bigwedge$ Solution-to-Unexecuted-Paths - MOP $\leq$ Perfect-Solution - Potentially more constrained, solution is small - hence *conservative* - It is not *safe* to be $>$ Perfect-Solution! - **Desirable solution: as close to MOP as possible** MOP Example Assume: B2 & B3 do not update x Ideal: Considers only 2 paths B1-B2-B4-B6-B7 (i.e., x=1) B1-B3-B4-B5-B7 (i.e., x=0) MOP: Also considers unexecuted paths B1-B2-B4-B5-B7 B1-B3-B4-B6-B7 Solving Data Flow Equations • **Example: Reaching definitions** • $\text{out[entry]} = \{\}$ • Values = \{subsets of definitions\} • Meet operator: $\bigcup$ • $\text{in[b]} = \bigcup \text{out}[p]$, for all predecessors $p$ of $b$ • **Transfer functions**: $\text{out[b]} = \text{gen}_b \bigcup (\text{in[b]} - \text{kill}_b)$ • **Any solution satisfying equations** = **Fixed Point Solution (FP)** • **Iterative algorithm** • initializes $\text{out[b]}$ to $\{\}$ • if converges, then it computes **Maximum Fixed Point (MFP)**: • MFP is the largest of all solutions to equations • **Properties**: • $\text{FP} \leq \text{MFP} \leq \text{MOP} \leq \text{Perfect-solution}$ • FP, MFP are safe • $\text{in}(b) \leq \text{MOP}(b)$ Partial Correctness of Algorithm • If data flow framework is monotone, then if the algorithm converges, $\text{IN}[b] \leq \text{MOP}[b]$ • Proof: Induction on path lengths – Define 
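The constant-propagation counterexample above can be checked concretely. The sketch below (hypothetical helper names, not the lecture's code) models a value as undef, a constant, or NAC (not-a-constant), and shows that merging the two branch states before evaluating c = a + b loses the fact that c = 5 on both paths — monotone, but not distributive.

```python
NAC, UNDEF = "NAC", "undef"   # lattice per variable: undef > constants > NAC

def meet_val(x, y):
    """Meet of two constant-propagation values."""
    if x == UNDEF: return y
    if y == UNDEF: return x
    return x if x == y else NAC

def meet_state(s1, s2):
    """Pointwise meet of two variable -> value maps."""
    return {v: meet_val(s1[v], s2[v]) for v in s1}

def add(x, y):
    """Abstract transfer function for c = a + b."""
    if NAC in (x, y): return NAC
    if UNDEF in (x, y): return UNDEF
    return x + y

left  = {"a": 2, "b": 3}   # branch 1: a = 2; b = 3
right = {"a": 3, "b": 2}   # branch 2: a = 3; b = 2

# f(x) ^ f(y): apply c = a + b on each path, then merge the results
c_each = meet_val(add(left["a"], left["b"]), add(right["a"], right["b"]))

# f(x ^ y): merge the branch states first, then apply c = a + b
merged = meet_state(left, right)          # a -> NAC, b -> NAC
c_merged = add(merged["a"], merged["b"])

print(c_each)    # 5   -> the precise per-path answer
print(c_merged)  # NAC -> merging first loses precision
# f(x ^ y) <= f(x) ^ f(y) holds (monotone), but equality fails (not distributive)
```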
\( \text{IN}[\text{entry}] = \text{OUT}[\text{entry}] \), and the transfer function of entry = identity function
  – Base case: path of length 0
    • Proper initialization of \( \text{IN}[\text{entry}] \)
  – If true for a path of length \( k \), \( p_k = (n_1, \ldots, n_k) \), then true for a path of length \( k+1 \): \( p_{k+1} = (n_1, \ldots, n_{k+1}) \)
    • Assume: \( \text{IN}[n_k] \leq f_{n_{k-1}}(f_{n_{k-2}}(\ldots f_{n_1}(\text{IN}[\text{entry}]))) \)
    • \( \text{IN}[n_{k+1}] = \text{OUT}[n_k] \land \ldots \leq \text{OUT}[n_k] = f_{n_k}(\text{IN}[n_k]) \leq f_{n_k}(f_{n_{k-1}}(\ldots f_{n_1}(\text{IN}[\text{entry}]))) \)

Precision

• If the data flow framework is distributive, then if the algorithm converges, \( \text{IN}[b] = \text{MOP}[b] \)
• [CFG: one branch sets a = 2; b = 3, the other sets a = 3; b = 2; both branches flow into c = a + b]
• Monotone but not distributive: behaves as if there are additional paths

Additional Property to Guarantee Convergence

• A (monotone) data flow framework converges if there is a finite descending chain
• For each variable IN[b], OUT[b], consider the sequence of values assigned to it across iterations:
  – if the sequence for in[b] is monotonically decreasing, the sequence for out[b] is monotonically decreasing (out[b] initialized to T)
  – if the sequence for out[b] is monotonically decreasing, the sequence for in[b] is monotonically decreasing

Speed of Convergence

• Speed of convergence depends on the order of node visits
• Reverse "direction" for backward flow problems

Reverse Postorder

• Step 1: depth-first post order

```
main() {
  count = 1;
  Visit(root);
}

Visit(n) {
  for each successor s that has not been visited
    Visit(s);
  PostOrder(n) = count;
  count = count + 1;
}
```

• Step 2: reverse the order

```
For each node i
  rPostOrder(i) = NumNodes - PostOrder(i)
```

Depth-First Iterative Algorithm (forward)

```
input: control flow graph CFG = (N, E, Entry, Exit)

/* Initialize */
out[entry] = init_value
For all nodes i
  out[i] = T
Change = True

/* Iterate */
While Change {
  Change = False
  For each node i in rPostOrder {
    in[i] = AND(out[p]), for all predecessors p of i
    oldout = out[i]
    out[i] = f_i(in[i])
    if oldout != out[i]
      Change = True
  }
}
```

Speed of Convergence

• If cycles do not add information
  – information can flow in one pass down a series of nodes of increasing order number: e.g., 1 -> 4 -> 5 -> 7 -> 2 -> 4 ...
  – the number of passes is determined by the number of back edges in the path — essentially the nesting depth of the graph
• Number of iterations = number of back edges in any acyclic path + 2
  – (2 are necessary even if there are no cycles)
• What is the depth?
  – corresponds to the depth of intervals for "reducible" graphs
  – in real programs: average of 2.75

A Check List for Data Flow Problems

• Semi-lattice
  – set of values
  – meet operator
  – top, bottom
  – finite descending chain?
• Transfer functions
  – function of each basic block
  – monotone
  – distributive?
• Algorithm
  – initialization step (entry/exit, other nodes)
  – visit order: rPostOrder
  – depth of the graph

Conclusions

• Dataflow analysis examples
  – Reaching definitions
  – Live variables
• Dataflow framework definition
  – Meet operator
  – Transfer functions
  – Correctness, Precision, Convergence
  – Efficiency

CSC D70: Compiler Optimization — Dataflow Analysis
Prof. Gennady Pekhimenko, University of Toronto, Winter 2019
The content of this lecture is adapted from the lectures of Todd Mowry and Phillip Gibbons
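As a concrete instance of the checklist and the iterative algorithm, here is a minimal reaching-definitions solver in Python (an illustrative sketch with a made-up three-block CFG, not the course's code): meet = set union, Top = {}, transfer function f(x) = gen ∪ (x − kill), and nodes visited in rPostOrder until a fixed point is reached.

```python
# Minimal iterative reaching-definitions solver (forward problem).
# CFG: entry -> b1 -> b2 -> b3, with a back edge b3 -> b1 (a loop).
preds = {
    "b1": ["entry", "b3"],
    "b2": ["b1"],
    "b3": ["b2"],
}
gen  = {"entry": set(), "b1": {"d1"}, "b2": {"d2"}, "b3": {"d3"}}
kill = {"entry": set(), "b1": {"d2"}, "b2": set(), "b3": {"d1"}}

nodes = ["b1", "b2", "b3"]                    # already in rPostOrder here
out = {n: set() for n in ["entry"] + nodes}   # Top = {} for union-meet

changed = True
iterations = 0
while changed:
    changed = False
    iterations += 1
    for n in nodes:                                      # visit in rPostOrder
        in_n = set().union(*(out[p] for p in preds[n]))  # meet = union
        new_out = gen[n] | (in_n - kill[n])              # transfer function
        if new_out != out[n]:
            out[n] = new_out
            changed = True

print(sorted(out["b3"]))  # ['d2', 'd3']: d1 is killed in b3
print(iterations)         # 3: two passes to propagate around the loop + 1 to confirm
```

The back edge b3 -> b1 forces a second pass (d3 flows back into b1 only after b3 is first processed), matching the "number of back edges + 2" bound above.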
Package ‘squash’

Version 1.0.9
Date 2020-02-19
Title Color-Based Plots for Multivariate Visualization
Author Aron C. Eklund
Maintainer Aron C. Eklund <aroneklund@gmail.com>
Imports graphics, grDevices, methods, stats
Description Functions for color-based visualization of multivariate data, i.e. colorgrams or heatmaps. Lower-level functions map numeric values to colors, display a matrix as an array of colors, and draw color keys. Higher-level plotting functions generate a bivariate histogram, a dendrogram aligned with a color-coded matrix, a triangular distance matrix, and more.
License Artistic-2.0
URL https://github.com/aroneklund/squash
NeedsCompilation no
Repository CRAN
Date/Publication 2020-02-20 07:00:05 UTC

R topics documented: cimage, cmap, colorgram, ColorPalettes, corrogram, dendromat, diamond, distogram, hist2, hkey, makecmap, matapply, prettyInt

cimage

Description

Draw a matrix of colored rectangles, possibly of varying sizes.

Usage

```r
cimage(x = NULL, y = NULL, zcol = NULL, zsize = 1, xlab = NULL,
       ylab = NULL, xlabels = NULL, ylabels = NULL, border = NA,
       add = FALSE, axes = TRUE, useRaster = FALSE, ...)
```
Arguments

- x: Vector of rectangle midpoints or breakpoints along the X-axis (corresponding to the columns of zcol).
- y: Vector of rectangle midpoints or breakpoints along the Y-axis (corresponding to the rows of zcol).
- zcol: Matrix of colors for each rectangle, e.g. RGB values or integer indices.
- zsize: Relative size for each rectangle, ranging from 0 to 1. Will be recycled if necessary.
- xlab, ylab: Labels for the axes.
- xlabels, ylabels: Categorical labels for rows/columns.
- border: Color for rectangle borders.
- add: Add to the current plot instead of creating a new one?
- axes: Draw axes on the plot?
- useRaster: TRUE = draw a true raster image (using rasterImage). FALSE = draw a series of individual rectangles.
- ...: Further arguments passed to plot.

Details

Data (x, y, and zcol) can be passed to this function in any format recognized by `xyzmat.coords`. This function is somewhat similar to the function `image`, except that the colors are specified explicitly, and the size of each rectangle can be adjusted.

If `xlabels` is `NULL` (the default), standard numeric axes are drawn on the X-axis. If `xlabels` is `TRUE`, the rownames of `zcol` are placed below each column. Otherwise, `xlabels` is taken as a vector of labels to be placed below each column. Likewise for `ylabels` and the Y-axis.

Using `useRaster=TRUE` can reduce the file size for large matrices drawn to vector-based graphics output such as PDFs. However, the output may look strange with smaller matrices on graphics devices that do smoothing by default (such as PDF output viewed in Preview).

Value

None.

Note

Currently, this function will not behave as expected if the x and/or y values are specified as midpoints and are not evenly spaced.

See Also

`image` and `rasterImage` provide somewhat similar functionality. This function is called by `colorgram`, which accepts a numeric (rather than color) matrix as input. The package `pixmap` may be more suitable for plotting images that are not data-driven (e.g. external files).
Examples

```r
## Visualize nearly all built-in R colors
color.mat <- matrix(colors()[1:625], nrow = 25)
cimage(zcol = color.mat)

## An example using "zsize"
x <- y <- 1:10
zcolor <- matrix(rainbow(100)[outer(x, y)], nrow = 10)
zsize <- matrix(runif(100), nrow = 10)
cimage(x, y, zcol = zcolor, zsize = zsize)

## Another simple example
red <- green <- 0:255
rg <- outer(red, green, rgb, blue = 1, maxColorValue = 255)
cimage(red, green, zcol = rg)

## The same, but using useRaster (resulting in faster image generation,
## and smaller file size if saved as a PDF)
cimage(red, green, zcol = rg, useRaster = TRUE)

## An example with categorical axes
colormixer <- function(x, y) {
  r <- (col2rgb(x) + col2rgb(y)) / 2
  rgb(as.data.frame(t(r)), maxColorValue = 255)
}
set.seed(123)
x <- sample(colors(), 15)
y <- sample(colors(), 10)
mix <- outer(x, y, colormixer)
op <- par(mar = c(8, 8, 2, 2), las = 2)
cimage(zcol = mix, xlabels = x, ylabels = y, xlab = NA, ylab = NA)
par(op)

## An example with non-uniform midpoints and breakpoints
rg2 <- rg[seq(1, 255, by = 62), seq(1, 255, by = 62)]
cimage(x = (1:5)^2, y = c(3, 5, 6, 9, 10, 11), zcol = rg2,
       zsize = matrix(runif(25, min = 0.5), nrow = 5))
```

---

cmap

### Apply a color map to numeric data

#### Description

Map numeric (scalars, vectors, matrices) into colors, (optionally) using a specified color map.

#### Usage

```r
cmap(x, map, outlier = NULL, ...)
```

#### Arguments

- `x` Something numeric (vector, matrix).
- `map` The color map to use (as created by `makecmap`). If missing, a color map is created.
- `outlier` Color for values outside the map domain, or NULL to generate an error in case of such values (see Details).
- `...` Arguments passed to `makecmap`, if map is undefined.

#### Details

Values in x outside the domain of map cause either an error (if outlier = NULL) or a warning (otherwise).

#### Value

Something of the same size as x. May be character (RGB) or integer (palettes) depending on the color map used.
Dimensions and dimnames are preserved. **See Also** `makecmap`. Also, `as.raster` and `level.colors` have similar functionality. **Examples** ```r x <- y <- 1:50 mat1 <- outer(x, y) ## several ways of visualizing the matrix mat1: plot(col(mat1), row(mat1), col = cmap(mat1), pch = 16) cimage(x, y, zcol = cmap(mat1)) colorgram(x = x, y = y, z = mat1) ## treatment of out-of-domain values map <- makecmap(0:100, colFn = greyscale) x <- y <- -10:10 mat2 <- outer(x, y, "+") ## Not run: ## Values outside the domain of "map" generate an error... plot(col(mat2), row(mat2), col = cmap(mat2, map), pch = 15, cex = 2) ## ... unless we specify "outlier", but this still generates a warning plot(col(mat2), row(mat2), col = cmap(mat2, map, outlier = "red"), pch = 15, cex = 2) ## End(Not run) ``` --- **colorgram** *Draw a colorgram (heatmap) of a matrix* **Description** Plot a visual representation of a numeric matrix using colors to indicate values. **Usage** ```r colorgram(x = NULL, y = NULL, z = NULL, zsize = 1, map, nz = 10, breaks = pretty, symm = FALSE, base = NA, colFn = jet, key = hkey, key.args = list(), xlab = NULL, ylab = NULL, zlab = NULL, outlier = NULL, ...) ``` Arguments - **x, y** Locations of grid lines at which the values in z are measured. These must be finite, non-missing and in (strictly) ascending order. (see Details below) - **z** A numeric matrix containing the values to be visualized as colors (NAs are allowed). Note that x can be used instead of z for convenience. - **zsize** A numeric matrix specifying the relative size of each rectangle. - **map** A list, as generated by `makecmap`. If missing, a color map is generated automatically. - **nz, breaks, symm, base, colFn** Arguments passed to `makecmap`, if map is missing. - **key** A function to draw a color key, such as `hkey` or `vkey`. - **key.args** Arguments passed to the function given by key. - **xlab, ylab** Labels for axes. - **zlab** Label (title) for the color key. 
- **outlier** Color for values outside the map domain. If NULL, values falling outside the map domain will generate an error.
- **...** Further arguments passed to `cimage`.

Details

This function assigns colors to the elements of a matrix and plots it using `cimage`. Data can be passed to this function in any format recognized by `xyzmat.coords`.

colorgram is somewhat similar to `image`. However, colorgram adds the following functionality:

1. The value-to-color mapping can be specified (thus allowing unequal bin sizes).
2. A color key can be added, optionally.
3. A color can be specified for missing values.
4. The size of each grid rectangle can be adjusted to convey additional information.

Two color key functions are provided in the squash package: 1) `hkey` draws a horizontal key, in the lower-left corner by default. 2) `vkey` draws a vertical key, in the lower-right corner by default. The latter usually looks better if the right-hand margin is increased.

These keys can be controlled somewhat using `key.args`. Note, however, that `title` and `map` cannot be specified in `key.args`; use the `zlab` and `map` arguments instead.

Value

Invisibly, map.

See Also

If this is not quite what you are looking for, consider `image`, `filled.contour`, or `levelplot`. Also `color2D.matplot` in the `plotrix` package.

Examples

```r
## median Petal.Length as function of Sepal.Length and Sepal.Width
pl <- matapply(iris[,1:3], FUN = median, nx = 20, ny = 15)

## Draw a colorgram with the default horizontal color key
colorgram(pl, main = 'iris')

## ... or with the vertical color key
colorgram(pl, main = 'iris', key = vkey)

## ... add margin space to improve legibility
op <- par(mar = c(5,4,4,4)+0.1)
colorgram(pl, main = 'iris', key = vkey, key.args = list(skip = 2),
          zlab = 'Petal\nlength')
par(op)

## Here is the example from the base function "persp"
x <- seq(-10, 10, length = 30)
y <- x
f <- function(x, y) { r <- sqrt(x^2 + y^2); 10 * sin(r)/r }
z <- outer(x, y, f)
colorgram(x, y, z)
## ...
## and with a slight fix to the key:
colorgram(x, y, z, key.args = list(wh = c(1, 4, 14)))

## We could also make more space for the key:
op <- par(mar = c(7,4,4,2)+0.1)
colorgram(x, y, z, key.args = list(stretch = 3))
par(op)

## Here are some alternatives to colorgram
persp(x, y, z, theta = 30, phi = 30, expand = 0.5, col = "lightblue")
image(x, y, z)
contour(x, y, z)

## Use 'xlabels' and 'ylabels' to create categorical axes
colorgram(t(mtcars[,c(2,8:11)]), colFn = heat, xlabels = TRUE,
          ylabels = TRUE, xlab = NA, ylab = NA, zlab = 'Value',
          main = 'Motor car specifications', las = 1)
```

---

**ColorPalettes** — **Bonus color palettes**

**Description**

Generate a vector of contiguous colors of a specified length.

**Usage**

```r
rainbow2(n)
jet(n)
heat(n)
coolheat(n)
blueorange(n)
bluered(n)
darkbluered(n)
greyscale(n, start = 0.9, end = 0)
grayscale(n, start = 0.9, end = 0)
```

**Arguments**

- n: Number of colors to return.
- start, end: Levels of gray (1 = white, 0 = black).

**Details**

rainbow2 is a variation of rainbow, in which the colors do not cycle completely around. Thus, rainbow2 may be less ambiguous as a color scale.

jet is similar to the Matlab color scheme of the same name and is taken from an example in colorRamp.

heat is similar to heat.colors, but starts at black rather than red. coolheat is the diverging version of heat, running from cyan to black to yellow.

blueorange and bluered range from blue to grey to orange (or red), and are intended to be used as diverging color scales. darkbluered ranges from dark blue to grey to dark red, and is intended to be used as a diverging color scale that emphasizes the magnitude more than the sign.

greyscale or grayscale ranges from off-white to black.

**Value**

A vector of RGB colors.

**See Also**

Standard R palettes such as rainbow. Custom palettes can be generated with colorRamp.
Examples

```r
## Present the squash palettes along with the built-in R palettes
squash.palettes <- c('rainbow2', 'jet', 'grayscale', 'heat',
                     'coolheat', 'blueorange', 'bluered', 'darkbluered')
R.palettes <- c('rainbow', 'heat.colors', 'terrain.colors',
                'topo.colors', 'cm.colors')
```

---

corrogram — Draw a color-coded triangular matrix of pairwise correlations

Description

This figure is a color-coded, rotated triangular matrix indicating the correlation between every pair of items.

Usage

```r
corrogram(...)
```

Arguments

- ...: Arguments passed to distogram.

Details

This is a simple wrapper around distogram, with the color scale set by default to use blueorange with a range from -1 to +1.

Value

A color map (as generated by makecmap), invisibly.

See Also

distogram

Examples

```r
corrogram(cor(swiss), title = 'Pearson correlation')
```

---

dendromat — Plot a dendrogram with a colorgram underneath

Description

Plot a dendrogram with a colorgram underneath. The colorgram typically indicates characteristics about each element in the dendrogram.

Usage

```r
dendromat(x, mat, labRow = rownames(mat), labCol = colnames(mat),
          height = NA, gap = 0, matlabside = 2, border = NA,
          cex.lab = par("cex.axis"), ...)
```

Arguments

- x: An object of type hclust or dendrogram.
- mat: A matrix or data frame of colors, with each row corresponding to an item in the dendrogram.
- labRow: Labels of items, to be placed underneath the matrix.
- labCol: Labels for characteristics, to be placed next to the matrix.
- height: Fraction of the plot area to reserve for the color matrix. If NA, the spacing is set automatically.
- gap: Extra space (in lines) to add between the dendrogram and the matrix.
- matlabside: Which side of the matrix to put labCol (2 or 4).
- border: Border color for the color matrix.
- cex.lab: Relative text size for the item labels.
- ...: Further arguments passed to plot.dendrogram.

Details

The order of labRow and the rows of mat should correspond to the input to hclust (or whatever function created x).
This function reorders mat and labRow to match the dendrogram, using order.dendrogram. This function combines two plots using layout; therefore it is incompatible with other multiple-plot schemes (e.g. par(mfrow)).

If height == NA (the default), the function tries to leave enough room for the item labels at the bottom, and enough room for the color matrix in the middle. The leftover plotting area on the top is used for the dendrogram. The lower margin setting (see par) is ignored.

If labRow is set to NULL, or is equal to NULL because mat lacks rownames, then the item labels are taken from x instead.

Value

None.

Note

Currently, horizontal dendrograms are not supported. After dendromat is finished, the user coordinates are set to c(0,1,0,1).

See Also

heatmap

Examples

```r
## Motor Trend car road test data
mt.dend <- hclust(dist(mtcars[,1:7]))
mt.mat <- mtcars[,8:11]

## A minimal dendromat
dendromat(mt.dend, mt.mat)

## The same plot, but with a few enhancements
names(mt.mat) <- c('Straight', 'Manual', '# gears', '# carbs')
dendromat(mt.dend, mt.mat, gap = 0.5, border = 'gray', las = 2,
          ylab = 'Euclidean distance',
          main = 'mtcars, clustered by performance')
legend('topright', legend = 0:8, fill = 0:8)

## US state data, with color keys
us.dend <- hclust(dist(scale(state.x77)))
income <- state.x77[, 'Income']
frost <- state.x77[, 'Frost']
murder <- state.x77[, 'Murder']
income.cmap <- makecmap(income, n = 5,
                        colFn = colorRampPalette(c('black', 'green')))
frost.cmap <- makecmap(frost, n = 5,
                       colFn = colorRampPalette(c('black', 'blue')))
murder.cmap <- makecmap(murder, n = 5,
                        colFn = colorRampPalette(c('black', 'red')))
us.mat <- data.frame(Frost = cmap(frost, frost.cmap),
                     Murder = cmap(murder, murder.cmap),
                     Income = cmap(income, income.cmap))
par(mar = c(5,4,4,3)+0.1)
dendromat(us.dend, us.mat, ylab = 'Distance', main = 'US states')
vkey(frost.cmap, 'Frost')
vkey(murder.cmap, 'Murder', y = 0.3)
```

diamond

Description

Draw diamonds on the graphics device.
Usage diamond(x, y = NULL, radius, ...) Arguments x, y Position(s) of the centers of the diamonds. radius Distances from the center to the vertex. ... Further arguments passed to polygon (e.g. col, border). Details x and y can be passed to diamond in any form recognized by xy.coords (e.g. individual vectors, list, data frame, formula). Only “square” (equilateral) diamonds are implemented here. See Also rect Examples plot(1:10) diamond(1:10, rep(3, 10), radius = 0.4) diamond(3, 8, 1, border = 3) diamond(1:10, rep(5, 10), radius = seq(0.1, 1, length = 10), col = 1:10) distogram **Draw a color-coded triangular distance matrix** **Description** This function draws a color-coded, rotated triangular matrix indicating the "distance" between every pair of items. **Usage** ``` distogram(x, map, n = 10, base = NA, colFn = heat, key = TRUE, title = NA, ...) ``` **Arguments** - `x` A `dist` object, or a square numeric matrix. - `map` A color map, as generated by `makecmap` (optional). - `n, base, colFn` Arguments passed to `makecmap`, if map is omitted. - `key` Add a color key? - `title` Title for the color key. - `...` Further arguments passed to `trianglegram` (e.g. labels). **Details** If the input `x` is a matrix, the lower triangle is extracted by default (but see the arguments for `trianglegram`). **Value** The color map, invisibly. **See Also** corrogram **Examples** ```r ## Distances between European cities distogram(eurodist, title = 'Distance (km)') ## Some variations map <- distogram(eurodist, key = FALSE, colFn = jet, right = TRUE) vkey(map, title = 'Distance (km)', x = -8) ``` hist2 Bivariate histogram Description Calculate data for a bivariate histogram and (optionally) plot it as a colorgram. Usage ```r hist2(x, y = NULL, nx = 50, ny = nx, xlim = NULL, ylim = NULL, xbreaks = NULL, ybreaks = NULL, plot = TRUE, xlab = NULL, ylab = NULL, zlab = "Counts", colFn = heat, breaks = prettyInt, ...) ``` Arguments - **x, y** Numeric vectors.
- **nx, ny** Approximate number of intervals along x and y axes. - **xlim, ylim** Limit the range of data points considered. - **xbreaks, ybreaks** Breakpoints between bins along x and y axes. - **plot** Plot the histogram? - **xlab, ylab** Axis labels. - **zlab** Label for the color key. - **colFn, breaks** Color key parameters; see `makecmap`. - **...** Further arguments passed to `colorgram`. Details Data can be passed to `hist2` in any form recognized by `xy.coords` (e.g. individual vectors, list, data frame, formula). Value Invisibly, a list with components: - **x** Vector of breakpoints along the x-axis. - **y** Vector of breakpoints along the y-axis. - **z** Matrix of counts. - **xlab** A label for the x-axis. - **ylab** A label for the y-axis. - **zlab** A label for the color key. See Also hist, for a standard (univariate) histogram. hist2d in the gplots package for another implementation. The hexbin package, for a hexagonal implementation. Examples ```r set.seed(123) x <- rnorm(10000) y <- rnorm(10000) + x hist2(x, y) ## pseudo-log-scale color breaks: hist2(x, y, breaks = prettyLog, key.args = list(stretch = 4)) ## log-scale color breaks; the old way using 'base' ## (notice box removal to make space for the vertical color key) hist2(x, y, base = 2, key = vkey, nz = 5, bty = 'l') ``` hkey Add a color key to a plot Description Add a horizontal or vertical color key to a plot Usage ```r hkey(map, title = NA, side = 1, stretch = 1.4, x, y, skip, wh) vkey(map, title = NA, side = 2, stretch = 1.4, x, y, skip, wh) ``` Arguments - `map`: A list, as generated by `makecmap`. - `title`: Title for the key. - `side`: Where to place the labels. (1 or 3 for hkey, 2 or 4 for vkey) - `stretch`: Aspect ratio of the color rectangles. - `x, y`: Position of lower left corner of the color rectangles. If missing, the key will be placed automatically in the lower-left (hkey) or lower-right (vkey) corner of the figure region. - `skip`: Omit every skip labels (optional).
- `wh`: Integer indices indicating which labels to include (optional).

Details

This function tries to label as many breakpoints as possible, but if the labels would overlap, a subset of labels is chosen automatically. If this doesn’t look right, the subset of labels can be specified with either `skip` or `wh`. Clipping is turned off, so the key can be placed anywhere in the figure region, including the margins.

Examples

```r
attach(iris)
map <- makecmap(Petal.Length)
pl.color <- cmap(Petal.Length, map = map)
plot(Sepal.Length, Sepal.Width, col = pl.color, pch = 16)
hkey(map, title = 'Petal length (hkey default)')
hkey(map, title = 'Another hkey', x = 3.8, y = 4.7, stretch = 3)
## looks bad with default margins
vkey(map, title = 'vkey default')
vkey(map, title = 'Small vkey', x = 7.8, y = 4, stretch = 0.3)
```

makecmap    Generate a color map from numeric values to colors

Description

Generate a color map from numeric values to a contiguous set of colors.

Usage

makecmap(x, n = 10, breaks = pretty, symm = FALSE, base = NA, colFn = jet, col.na = NA, right = FALSE, include.lowest = FALSE, ...)

Arguments

- **x**: A vector of numbers (only the finite range is used).
- **n**: Approximate number of color levels desired.
- **breaks**: A function to generate breakpoints, or the breakpoints themselves.
- **symm**: Extend the mapping domain to be symmetric around zero?
- **base**: Base for log scale, or NA to use a linear scale.
- **colFn**: A function that generates contiguous colors.
- **col.na**: Color to use for missing values.
- **right**: Logical; if TRUE, the intervals will be closed on the right (and open on the left).
- **include.lowest**: Logical, indicating if an \( x[i] \) equal to the lowest (or highest, for right = FALSE) breaks value should be included.
- **...**: Further arguments to breaks.

### Details

The general point of this function is to automatically generate a mapping that can be used in combination with `cmap` to represent numeric data with colors in a consistent way.
- **colFn**: Should be a function that returns a vector of colors of specified length, such as `rainbow` or `greyscale`. Custom functions of this type can be generated with `colorRampPalette`. The breakpoints can be specified explicitly by setting `breaks` to a vector of numbers, in which case `x` is ignored. Otherwise, the breakpoints are chosen to be nice, relatively round values (using `pretty`, or another function passed to `breaks`) covering the finite range of `x`. - **symm**: If TRUE, the map domain is extended such that it is symmetric around zero. This can be useful when using divergent color palettes to ensure that the zero point is a neutral color. - **base**: If specified, the breakpoints are generated using log-transformed data. However, setting `breaks = prettyLog` might be preferable. ### Value A list with the following components: - **breaks**: Breakpoints (numeric vector). - **colors**: Colors (character or numeric vector). - **base**: (as supplied in arguments) - **col.na**: (as supplied in arguments) - **right**: (as supplied in arguments) - **include.lowest**: (as supplied in arguments) ### See Also - `cmap` and `colorgram` use the mappings generated by this function. - `hkey` plots a color key. Consider setting `breaks = prettyInt` or `breaks = prettyLog`. ### Examples ```r attach(iris) map1 <- makecmap(Petal.Length) myColors <- cmap(Petal.Length, map = map1) plot(Sepal.Length, Sepal.Width, col = myColors, pch = 16) hkey(map1, title = 'Petal.Length') ``` ```r ## Compare the 'breaks' element in the following: x <- rnorm(100) * 1000 str(makecmap(x)) str(makecmap(x, breaks = c(-Inf, -1000, 0, 1000, Inf))) str(makecmap(x, breaks = prettyLog)) ``` --- **matapply** Apply a function over z coordinates, binned by their x, y coordinates --- **Description** Divide the range of x and y into intervals, thus forming a matrix of bins, and apply an arbitrary function to the z values corresponding to each bin.
**Usage** ```r matapply(x, y = NULL, z = NULL, FUN, nx = 50, ny = nx, xlim = NULL, ylim = NULL, xbreaks = NULL, ybreaks = NULL, right = FALSE, include.lowest = TRUE, ...) ``` **Arguments** - `x, y, z` Numeric vectors, or possibly a matrix. - `FUN` Function to summarize z values. - `nx, ny` Approximate number of bins along x and y axes. - `xlim, ylim` Limit the range of data points considered. - `xbreaks, ybreaks` Breakpoints between bins along x and y axes. - `right` Logical; if TRUE, the intervals will be closed on the right (and open on the left). - `include.lowest` Logical, indicating if an x[i] equal to the lowest (or highest, for right = FALSE) breaks value should be included. - `...` Further arguments to `FUN`. **Details** x, y and z values can be passed to `matapply` in any form recognized by `xyz.coords` (e.g. individual vectors, list, data frame, formula). Alternatively, data that is already in a matrix can be passed in any format recognized by `xyzmat.coords`. `FUN` should accept a numeric vector and return a single numeric value (e.g. `mean`, `median`, `min`, `max`, `sd`). If `xbreaks` is not specified, approximately `nx` breakpoints will be generated automatically to span the data; likewise for `ybreaks` and `ny`. The output can be visualized with `colorgram`, `image`, etc. prettyInt Pretty breakpoints ### Description Compute a sequence of around n values covering the range of x. These functions are variations of the standard R function `pretty`. ### Usage ```r prettyInt(x, n = 5, ...) prettyLog(x, n = 5, small = NA, logrange = c(-100, 100)) ``` Arguments - **x**: Numeric vector. - **n**: Approximate number of values to return. - **small**: Value below which distinction from zero is unimportant. - **logrange**: Log (base 10) of the range of values to consider as possible breakpoints. - **...**: Further arguments passed to `pretty`.
Details `prettyInt` returns integer values, even if this forces the number of values returned to be much lower than the requested number `n`. However, at least two values will be returned. `prettyLog` returns values that are approximately evenly spaced on a log scale, such as (1, 3, 10, 30, ...) or (1, 2, 5, 10, 20, 50, ...) or (1, 10, 100, ...). Negative or zero values in `x` are accommodated by series such as (-100, -10, -1, 0, 1, 10, 100, ...). Setting the parameter `small` to a non-NA value causes elements of `x` with absolute values below `small` to be ignored. Value A numeric vector. See Also - `pretty` Examples ```r ## x1 <- 1:3 pretty(x1) prettyInt(x1) prettyLog(x1) ## x2 <- pi ^ (1:8) range(x2) pretty(x2) prettyLog(x2) prettyLog(x2, n = 10) ## x3 <- c(-x2, x2) pretty(x3) prettyLog(x3) prettyLog(x3, small = 100) ``` savemat Save a matrix as a raster image file Description Save a matrix as a PNG, TIFF, BMP, JPEG, or PDF image file, such that each pixel corresponds to exactly one element of the matrix. Usage savemat(x, filename, map = NULL, outlier = NULL, dev = c('png', 'pdf', 'bmp', 'tiff', 'jpeg'), do.dev.off = TRUE, ...) Arguments x A matrix filename Filename map (Optional) a list, as generated by makecmap. outlier (Optional) A color for outliers, if map is specified. dev Which graphics device to use. do.dev.off Close graphics device when finished? ... Further arguments passed to the graphics device; see png or pdf. Details This function is a relatively simple wrapper around the usual graphics device with the same name as dev. The idea is to provide an easy way of creating an image file from a matrix, without axes, plotting frame, labels, etc. For all choices of dev except "pdf", the output image dimensions are set to match the matrix size, such that each pixel corresponds to an element of the matrix. If map is NULL (the default), the matrix is interpreted as a matrix of colors.
If map is specified, it is used to translate the numeric matrix x into a matrix of colors, using cmap. Value None. See Also cimage for drawing a matrix on the screen. Examples ```r ## Not run: big.color.matrix <- matrix(rep(colors()[1:625], 16), nrow = 100) ## save as a PNG savemat(big.color.matrix, file = 'test.png') ## End(Not run) ``` **squashgram** *Visualize a function of z coordinates, binned by x, y coordinates* Description This is a convenience function combining `matapply` and `colorgram`. 3-dimensional data is summarized in 2-dimensional bins and represented as a color matrix. Optionally, the number of observations in each bin is indicated by relative size of the matrix elements. Usage ```r squashgram(x, y = NULL, z = NULL, FUN, nx = 50, ny = nx, xlim = NULL, ylim = NULL, xbreaks = NULL, ybreaks = NULL, xlab = NULL, ylab = NULL, zlab = NULL, shrink = 0, ...) ``` Arguments - `x`, `y`, `z` Numeric vectors; see Details. - `FUN` Function to summarize z values. - `nx`, `ny` Approximate number of bins along x and y axis. - `xlim`, `ylim` Limit the range of data points considered. - `xbreaks`, `ybreaks` Breakpoints between bins along x and y axes. - `xlab`, `ylab` Axis labels. - `zlab` Label for color key. - `shrink` Rectangle shrinkage cutoff. - `...` Further arguments passed to `colorgram`. Details This function may be useful for visualizing the dependence of a variable \( z \) on two other variables \( x \) and \( y \). \( x \), \( y \) and \( z \) values can be passed to `squash` in any form recognized by `xyz.coords` (e.g. individual vectors, list, data frame, formula). This function calls `matapply` and plots the result along with a color key. If non-zero, the `shrink` parameter reduces the size of rectangles for the bins in which the number of samples is smaller than `shrink`. This may be useful to reduce the visual impact of less reliable observations. Value None. See Also The lower-level functions `matapply` and `colorgram`. 
Examples ```r ## earthquake depths in Fiji attach(quakes) squashgram(depth ~ long + lat, FUN = mean) ## iris measurements attach(iris) squashgram(Sepal.Length, Sepal.Width, Petal.Length, FUN = median, nx = 20, ny = 15) ## Here indicate sample size by size of rectangles squashgram(iris[,1:3], FUN = median, nx = 20, ny = 15, shrink = 5) ## What is the trend in my noisy 3-dimensional data? set.seed(123) x <- rnorm(10000) y <- rnorm(10000) z <- rnorm(10000) + cos(x) + abs(y / 4) squashgram(x, y, z, median, colFn = bluered, shrink = 5) ``` trianglegram Draw a color-coded triangular matrix Description This function is called by `distogram`, and probably isn’t very useful by itself. Usage trianglegram(x, labels = rownames(x), lower = TRUE, diag = FALSE, right = FALSE, add = FALSE, xpos = 0, ypos = 0, xlim, ylim, ...) Arguments x A square matrix containing color values. labels Labels. lower If TRUE, use lower.tri, else use upper.tri. diag Include the diagonal elements of x? right Should triangle point to the right or left? add Add to an existing plot? xpos, ypos Location of bottom point of the triangle. xlim, ylim Plotting limits. ... Further arguments passed to plot. Details The input must be a (square) matrix; however, only part of the matrix (the upper or lower triangle) is displayed. Value none. See Also distogram, corrogram Examples m <- matrix(jet(40), nrow = 20, ncol = 20) trianglegram(m) ## just for fun trianglegram(m, labels = NA, right = TRUE, add = TRUE, xpos = 1) xyzmat.coords Extract (x, y, z) coordinates, where z is a matrix Description Extract (x, y, z) plotting coordinates, where z is a matrix. Usage xyzmat.coords(x = NULL, y = NULL, z = NULL, xlab = NULL, ylab = NULL, zlab = NULL, xds = NULL, yds = NULL, zds = NULL) Arguments x, y Numeric vectors. z A matrix xlab, ylab, zlab Labels xds, yds, zds Results from deparse(substitute(x)) (etc.); see below. Details This function is similar to xyz.coords, except that this function accepts a matrix for z. 
If x is the same length as nrow(z), x will be taken as the points at which the z values were sampled. If x has length nrow(z) + 1, x is taken as the breakpoints between bins. If x is missing, the matrix indices (1:nrow(z)) will be used. Similarly for y and the columns of z. For convenience, the matrix can be supplied as the x argument. Or, x can be a list with elements including {x, y, z, xlab, ylab, zlab}. When this function is used inside a higher-level plotting function, the arguments xds, yds, and zds should be set to deparse(substitute(x)) (etc.) so that the function can generate informative default axis labels. For example, see the code for colorgram. Value A list with the following components: x X coordinates y Y coordinates z Z matrix xlab Label for X axis ylab Label for Y axis zlab Label for Z axis Examples ```r ## str(volcano) volcano.xyzmat <- xyzmat.coords(volcano) str(volcano.xyzmat) ``` --- **xyzmat2xyz** *Convert (x, y, zmat) coordinates to (x, y, z) coordinates* **Description** Convert a matrix of Z coordinates into (x, y, z) triples. **Usage** ```r xyzmat2xyz(...) ``` **Arguments** - `...` Arguments passed to `xyzmat.coords` **Details** The input is based on `xyzmat.coords`. The output is as returned by `xyz.coords`. **Value** A list; see `xyz.coords`.
**Examples**

```r
## str(volcano)
volcano.xyz <- xyzmat2xyz(volcano)
str(volcano.xyz)
```
Sockets and Beyond: Assessing the Source Code of Network Applications Miika Komu *Aalto University, Department of Computer Science and Engineering* mika@iki.fi Samu Varjonen, Andrei Gurtov, Sasu Tarkoma *University of Helsinki and Helsinki Institute for Information Technology* firstname.lastname@hiit.fi Abstract Network applications are typically developed with frameworks that hide the details of low-level networking. The motivation is to allow developers to focus on application-specific logic rather than low-level mechanics of networking, such as name resolution, reliability, asynchronous processing and quality of service. In this article, we characterize statistically how open-source applications use the Sockets API and identify a number of requirements for network applications based on our analysis. The analysis considers five fundamental questions: naming with end-host identifiers, name resolution, multiple end-host identifiers, multiple transport protocols and security. We discuss the significance of these findings for network application frameworks and their development. As two of our key contributions, we present generic solutions for a problem with OpenSSL initialization in C-based applications and a multihoming issue with UDP in all of the analyzed four frameworks. 1 Introduction The Sockets API is the basis for all internet applications. While the number of applications using it directly is large, some applications use it indirectly through intermediate libraries or frameworks to hide the intricacies of the low-level Sockets API. Nevertheless, the intermediaries still have to interface with the Sockets API. Thus, the Sockets API is important for all network applications either directly or indirectly but has been studied little. To fill in this gap, we have statistically analyzed the usage of Sockets API to characterize how contemporary network applications behave in Ubuntu Linux. 
In addition to merely characterizing the trends, we have also investigated certain programming pitfalls pertaining to the Sockets API. As a result, we report ten main findings and how they impact a number of relatively new Sockets API extensions. To mention a few examples, the poor adoption of a new DNS look up function slows down the migration path for the extensions dependent on it, such as the APIs for IPv6 source address selection and HIP. The OpenSSL library is initialized incorrectly in many applications, causing potential security vulnerabilities. The management of the dual use of TCP/UDP transports and the dual use of the two IP address families creates redundant complexity in applications. To escape the unnecessary complexity of the Sockets API, some applications utilize network application frameworks. However, the frameworks are themselves based on the Sockets API and, therefore, subject to the same scrutiny as applications using the Sockets API. For this reason, it is natural to extend the analysis to frameworks. We chose four example frameworks based on the Sockets API and analyzed them manually in the light of the Sockets API findings. Since frameworks can offer high-level abstractions that do not have to mimic the Sockets API layout, we organized the analysis of the frameworks in a top-down fashion and along generalized dimensions of end-host naming, multiplicity of names and transports, name look up and security. As a highlight of the framework analysis, we discovered a persistent problem with multiplicity of names in all of the four frameworks. To be more precise, the problem was related to multihoming with UDP. In this article, we describe how to solve some of the discovered issues in applications and frameworks using the Sockets API. We also characterize some of the inherent limitations of the Sockets API, for instance, related to complexity.
2 Background

In this section, we first introduce the parts of the Berkeley Sockets and the POSIX APIs that are required to understand the results described in this article. Then, we briefly introduce four network application frameworks built on top of the two APIs.

2.1 The Sockets API

The Sockets API is the de-facto API for network programming due to its availability for various operating systems and languages. As the API is rather low level and does not support object-oriented languages well, many networking libraries and frameworks offer additional higher-level abstractions to hide the details of the Sockets API. Unix-based systems typically provide an abstraction of all network, storage and other devices to the applications. The abstraction is realized with descriptors, which are also sometimes called handles. The descriptors are either file or socket descriptors. Both of them have different, specialized accessor functions, even though socket descriptors can be operated on with some of the file-oriented functions. When a socket descriptor is created with the socket() function, the transport protocol has to be fixed for the socket. In practice, the SOCK_STREAM constant fixes the transport protocol to TCP and the SOCK_DGRAM constant to UDP. For IPv4-based communications, an application uses a constant called AF_INET, or its alias PF_INET, to create an IPv4-based socket. For IPv6, the application correspondingly uses AF_INET6 or PF_INET6.

2.1.1 Name Resolution

An application can look up names from DNS by calling the gethostbyname() or gethostbyaddr() functions. The former looks up the host information from the DNS by its symbolic name (forward look up) and the latter by its numeric name, i.e., IP address (reverse look up). While both of these functions support IPv6, they are obsolete and their modern replacements are the getnameinfo() and getaddrinfo() functions.
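The modern resolver interface can be illustrated with a short C sketch. The helper name `resolve_first_ipv4` is ours, not part of any API, and error handling is kept minimal; it resolves a host/port pair with `getaddrinfo()` and copies the first IPv4 result into a caller-supplied buffer:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Resolve `host`/`port` with getaddrinfo() and copy the first IPv4
 * address, in dotted-decimal form, into `buf`. Returns 0 on success.
 * (Illustrative helper, not part of the Sockets API itself.) */
static int resolve_first_ipv4(const char *host, const char *port,
                              char *buf, size_t len)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;        /* restrict results to IPv4 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP-style entries only   */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int ret = -1;
    for (p = res; p != NULL; p = p->ai_next) {
        struct sockaddr_in *sin = (struct sockaddr_in *)p->ai_addr;
        if (inet_ntop(AF_INET, &sin->sin_addr, buf, len) != NULL) {
            ret = 0;
            break;
        }
    }
    freeaddrinfo(res);  /* unlike gethostbyname(), no static buffers */
    return ret;
}

int main(void)
{
    char addr[INET_ADDRSTRLEN];
    /* A numeric look up works even without network access. */
    if (resolve_first_ipv4("127.0.0.1", "80", addr, sizeof(addr)) == 0)
        printf("resolved to %s\n", addr);
    return 0;
}
```

Unlike the obsolete functions, `getaddrinfo()` is thread safe (results are heap-allocated and freed with `freeaddrinfo()`) and the `hints` structure gives the caller control over which address family and socket type are returned.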
2.1.2 Delivery of Application Data

A client-side application can start sending data immediately after creation of the socket; however, the application typically calls the connect() function to associate the socket with a certain destination address and port. The connect() call also triggers the TCP handshake for sockets of SOCK_STREAM type. Then, the networking stack automatically associates a source address and port with the socket if the application did not choose them explicitly with the bind() function. Finally, a close() call terminates the socket gracefully and, when the type of the socket is SOCK_STREAM, the call also initiates the shutdown procedure for TCP. Before a server-oriented application can receive incoming datagrams, it has to call a few functions. Minimally with UDP, the application has to define the port number and IP address to listen to by using bind(). Typically, TCP-based services supporting multiple simultaneous clients prepare the socket with a call to the listen() function for the following accept() call. By default, the accept() call blocks the application until a TCP connection arrives. The function then “peels off” a new socket descriptor from the existing one, separating the particular connection with the client from the others. The constant INADDR_ANY is used with bind() to listen for incoming datagrams on all network interfaces and addresses of the local host. This wildcard address is typically employed in server-side applications. An application can deliver and retrieve data from the transport layer in multiple alternative ways. For instance, the write() and read() functions are file-oriented functions but can also be used with socket descriptors to send and receive data. As counterparts to these two file-oriented functions, the Sockets API defines its own specialized functions. For datagram-oriented networking with UDP, the sendto() and the recvfrom() functions can be used.
Complementary functions sendmsg() and recvmsg() offer more advanced interfaces for applications [19]. They operate on scatter arrays (multiple non-consecutive I/O buffers instead of just one) and also support so-called ancillary data, that is, meta-data and information related to network packet headers. In addition to providing the rudimentary service of sending and receiving application data, the socket calls also implement access control. The `bind()` and `connect()` calls limit ingress (but not egress) network access to the socket by setting the allowed local and remote end points. Similarly, the `accept()` call effectively constrains remote access to the newly created socket by allowing communications only with the particular client. Functions `send()` and `recv()` are typically used for connection-oriented networking, but can also be used with UDP to limit remote access. ### 2.1.3 Customizing Networking Stack The Sockets API provides certain default settings for applications to interact with the transport layer. The settings can be altered in multiple different ways. With “raw” sockets, a process can basically create its own transport-layer protocol or modify the network-level headers. A privileged process creates a raw socket with the constant `SOCK_RAW`. A more constrained way to alter the default behavior of the networking stack is to set socket options with `setsockopt()`. As an example of the options, the `SO_REUSEADDR` socket option can be used to disable the default “grace period” of a locally reserved transport-layer port. By default, consecutive calls to `bind()` with the same port fail until the grace period has passed. Especially during the development of a networking service, this grace period is usually disabled for convenience because the developed service may have to be restarted quite often for testing purposes.
### 2.2 Sockets API Extensions

Basic Socket Interface Extensions for IPv6 [5] define additional data structures and constants, including `AF_INET6` and `sockaddr_in6`. The extensions also define new DNS resolver functions, `getnameinfo()` and `getaddrinfo()`, as the old ones, `gethostbyname()` and `gethostbyaddr()`, are now obsolete: they are not thread safe and offer too little control over the resolved addresses. The specification also defines IPv6-mapped IPv4 addresses to improve IPv6 interoperability.

An IPv6 application can typically face a choice of multiple source and destination IPv6 address pairs to choose from. Picking a pair may not be a simple task because some of the pairs may not even result in working connectivity. IPv6 Socket API for Source Address Selection [13] defines extensions that restrict the local or remote address to a certain type, for instance, public or temporary IPv6 addresses. The extensions include new socket options to restrict the selection of local addresses when, e.g., a client application connects without specifying the source address. For remote address selection, new flags for the `getaddrinfo()` resolver are proposed. The extensions mainly affect client-side connectivity but can also have an effect at the server side when UDP is being used.

The Datagram Congestion Control Protocol (DCCP) is similar to TCP but does not guarantee in-order delivery. An application can use it, with minor changes, by supplying the `SOCK_DCCP` constant when a socket is created.

**Multihoming** is becoming interesting because most modern handhelds are equipped with, e.g., 3G and WLAN interfaces. In the scope of this work, we associate "multihoming" with hosts that have multiple IP addresses, typically introduced by multiple network interfaces. Multihoming can be further characterized by whether it occurs in the initial phases of connectivity or during established communications.
All of the statistics in this article refer to the former case because the latter typically requires some extra logic in the application or additional support from the lower layers. When written correctly, UDP-based applications can support multihoming for initial connectivity, and the success of this capability is investigated in detail in this article. However, supporting multihoming in TCP-based applications is more difficult to achieve and requires additional extensions. A solution at the application layer is to recreate connections when they are rendered broken. At the transport layer, Multipath TCP [4] is a TCP-specific solution that supports multihoming in a way that is compatible with legacy applications, with optional APIs for native applications [16].

The Stream Control Transmission Protocol (SCTP, [21]) implements an entirely new transport protocol with full multihoming capabilities. In a nutshell, SCTP offers a reliable, congestion-aware, message-oriented, in-sequence transport protocol. The minimum requirement to enable SCTP in an existing application is to change the protocol type in the `socket()` call to SCTP. However, the application can only fully harness the benefits of the protocol by utilizing the `sendmsg()` and `recvmsg()` interface. The protocol also supports sharing a single socket descriptor among multiple simultaneous communication partners, which requires some additional logic in the application.

Transport-independent solutions operating at the lower layers include the Host Identity Protocol (HIP) [11] and Site Multihoming by IPv6 Intermediation (SHIM6) [12]. In brief, HIP offers support for end-host mobility, multihoming and NAT traversal. By contrast, SHIM6 is mainly a multihoming solution. From the API perspective, SHIM6 offers backwards-compatible identifiers for IPv6, in the sense that they are routable at the network layer, whereas the identifiers in HIP are non-routable.
HIP has its own optional APIs for HIP-aware applications [9], but both protocols share the same optional multihoming APIs [8].

Name-based Sockets are a work in progress at the IETF standardization forum. While the details of the specification [23] are rather immature and the specification still lacks official consent of the IETF, the main idea is to provide extensions to the Sockets API that replace IP addresses with DNS-based names. In this way, the responsibility for the management of IP addresses is pushed down the stack, away from the application layer.

### 2.3 NAT Traversal

Private address realms [18] were essentially introduced by NATs, but Virtual Private Networks (VPNs) and other tunneling solutions can also make use of private addresses. Originally, the concept of virtual address spaces was created to alleviate the depletion of the IPv4 address space, perhaps because it appeared that most client hosts did not need publicly reachable addresses. As a side effect, NATs also offer some security to the client side because they discard new incoming data flows by default. To work around NATs, Teredo [7] offers a NAT traversal solution that appears as a transparent tunnel to the applications. The protocol tries to penetrate through NAT boxes to establish a direct end-to-end tunnel, but can resort to triangular routing through a proxy in the case of an unsuccessful penetration.

### 2.4 Transport Layer Security

Transport Layer Security (TLS) [22] is a cryptographic protocol that can be used to protect communications above the transport layer. TLS and its predecessor, Secure Socket Layer (SSL), are the most common way to protect TCP-based communications over the Internet. In order to use SSL or TLS, a C/C++ application is usually linked to a library implementation such as OpenSSL or GNU TLS. The application then calls the APIs of the TLS/SSL library instead of using the Sockets API directly.
The functions of the library are wrappers around the Sockets API and are responsible for securing the data inside the TCP stream.

### 2.5 Network Frameworks

The Sockets API could be characterized as somewhat complicated and error-prone to program directly. It is also "flat" by nature because it was not designed to accommodate object-oriented languages. For these reasons, a number of libraries and frameworks have been built to hide the details of the Sockets API and to introduce object-oriented interfaces.

The Adaptive Communication Environment (ACE) [17] is one such framework. ACE simplifies the development of networking applications because it offers abstracted APIs based on network software patterns observed in well-written software. Among other things, ACE includes network patterns related to connection establishment and service initialization, in addition to facilitating concurrent software and distributed communication services. It supports asynchronous communications through inversion of control, i.e., the framework takes over the control of the program flow and invokes registered functions of the application when needed.

Boost::Asio is another open-source C++ library that offers high-level networking APIs to simplify the development of networking applications. Boost::Asio aims to be portable, scalable and efficient but, most of all, it provides a starting point for implementing further abstraction. Several Boost C++ libraries have already been included in the C++ Technical Report 1 and in C++11. In 2006, a networking proposal based on Asio was submitted to request inclusion in the upcoming Technical Report 2.

Java provides an object-oriented framework for the creation and use of sockets. The java.net package supports TCP (the Socket class) and UDP (the DatagramSocket class). These classes implement communication over an IP network.

Twisted is a modular, high-level networking framework for Python.
Similarly to ACE, Twisted is also based on inversion of control and asynchronous messaging. Twisted has built-in support for multiple application-layer protocols, including IRC, SSH and HTTP. What distinguishes Twisted from the other frameworks is its focus on service-level functionality: adaptable functionality that can be run on top of several application-layer protocols.

### 3 Materials and Methods

We collected information related to Sockets API usage in open-source applications. In this article, we refer to this information as indicators. An indicator refers to a constant, structure or function of the C language. We analyzed the source code for indicators in a static way (based on keywords) rather than dynamically. The collected set of indicators was limited to networking-related keywords obtained from the keyword indexes of two books [20, 15].

We gathered the material for our analysis from all of the released Long-Term Support (LTS) releases of Ubuntu: Dapper Drake 6.06, Hardy Heron 8.04 and Lucid Lynx 10.04. Table 1 summarizes the number of software packages gathered per release. In the table, the "Patched" row shows how many applications were patched by Ubuntu. We used the sections "main", "multiverse", "universe" and "security" from Ubuntu. The material was gathered on Monday, 7th of March 2011, and was constrained to software written in the C language.

Since our study was confined to networking applications, we selected only software in the categories of "net", "news", "comm", "mail" and "web" (in Lucid, the last category was renamed "httpd"). We did not limit or favor the set of applications, e.g., based on any popularity metrics. We assumed that an application was of at least some interest if it was being maintained by someone in Ubuntu. To be more useful for the community, we analyzed all network applications and did not discriminate against less popular ones.
This way, we did not have to choose between different definitions of popularity, although the Ubuntu popularity contest might have served as a decent metric. We did perform an outlier analysis in which we compared the whole set of applications to the most popular applications (100 or more installations). We discovered that the statistical "footprint" of the popular applications differs from that of the whole set. However, the details are omitted because such a restriction conflicted with our goal of analyzing all network applications.

In our study, we concentrated on the POSIX networking APIs and the Berkeley Sockets API because they form the de facto, low-level API for all networking applications. However, we extended the API analysis to OpenSSL to study the use of security as well. All three of these APIs have bindings for high-level languages, such as Java and Python, and can be used indirectly from network application frameworks and libraries. As the API bindings used in other languages differ from those used in the C language, we excluded other languages from this study.

From the data gathered, we calculated sums and means of the occurrences of each indicator. We also calculated a separate "reference" number, formed by introducing a binary value to denote whether a software package used a particular indicator (1) or not (0), independently of the number of occurrences. The reference number for a specific indicator was collected from all software packages, and these reference numbers were then summed and divided by the number of packages to obtain a reference ratio.
In other words, the reference ratio describes the extent of an API indicator with one normalized score. We admit that the reference number is a very coarse-grained metric; it indicates capability rather than a 100% guarantee that the application will use a specific indicator on all of its runs. However, its binary (or "flattened") nature has one particular benefit that cancels out an unwanted side effect of the static code analysis, which is perhaps easiest to describe by example. Consider an application in which memory allocations and deallocations can be implemented in various ways. The application can call `malloc()` a hundred times but then call `free()` only once. Merely looking at the volumes of calls would give a wrong impression about memory leaks because the application could have a wrapper function for `free()` that is called a hundred times. In contrast, a reference number of 1 for `malloc()` and 0 for `free()` indicates that the application definitely has one or more memory leaks. Correspondingly, the reference ratio describes this for the entire population of applications.

<table>
<thead>
<tr> <th></th> <th>Dapper</th> <th>Hardy</th> <th>Lucid</th> </tr>
</thead>
<tbody>
<tr> <td>Total</td> <td>1,355</td> <td>1,472</td> <td>1,147</td> </tr>
<tr> <td>Patched</td> <td>1,222</td> <td>1,360</td> <td>979</td> </tr>
<tr> <td>C</td> <td>721</td> <td>756</td> <td>710</td> </tr>
<tr> <td>C++</td> <td>57</td> <td>77</td> <td>88</td> </tr>
<tr> <td>Python</td> <td>126</td> <td>148</td> <td>98</td> </tr>
<tr> <td>Ruby</td> <td>19</td> <td>27</td> <td>13</td> </tr>
<tr> <td>Java</td> <td>9</td> <td>10</td> <td>8</td> </tr>
<tr> <td>Other</td> <td>423</td> <td>454</td> <td>232</td> </tr>
</tbody>
</table>

Table 1: Number of packages per release version.

[1] The authors believe that a more dynamic or structural analysis would not have revealed any important information on the issues investigated.
[2] http://www.cs.helsinki.fi/u/sklvarjo/LS12/
In our results, we also show reference ratios of combined indicators, calculated by taking a union or intersection of indicators, depending on the use case. For combined indicators, we used tightly coupled indicators that make sense in the context of each other.

### 4 Results and Analysis

In this section, we show the most relevant statistical results. We focus on the findings where there is room for improvement or that are relevant to the presented Sockets API extensions. Then, we highlight the most significant patterns or key improvements for the networking applications. Finally, we derive a set of more generic requirements from the key improvements and see how they are met in four different network application frameworks.

### 4.1 Core Sockets API

In this section, we characterize how applications use the "core" Sockets API. As in the background section, the topics are organized into sections on IPv6, DNS, transport protocols and customization of the networking stack. In the last section, we describe a multihoming issue related to UDP. In the results, the reference ratios of indicators are usually shown inside brackets. All numeric values are from Ubuntu Lucid unless otherwise mentioned. Figure 1 illustrates some of the most frequent function indicators by their reference ratio, and the following sections analyze the most interesting cases in more detail.

### 4.1.1 IPv6

According to the usage of AF and PF constants, 39.3% of the applications were IPv4-only, 0.3% IPv6-only and 26.9% hybrid, while 33.5% did not reference either of the constants. To recap, while the absolute use of IPv6 was not high, the relative proportion of hybrid applications supporting both protocols was quite high.

### 4.1.2 Name Resolution

The obsolete DNS name-look-up functions were referenced more than their modern replacements. The obsolete forward look-up function `gethostbyname()` was referenced roughly twice as often as its modern replacement `getaddrinfo()`.
Two possible explanations for this are that the developers have, for some reason, preferred the obsolete functions, or that they have neglected to modernize their software.

### 4.1.3 Packet Transport

Connection-oriented and datagram-oriented APIs were roughly equally popular. Based on the usage of the `SOCK_STREAM` and `SOCK_DGRAM` constants, we accounted for 25.1% TCP-only and 11.0% UDP-only applications. Hybrid applications supporting both protocols accounted for 26.3%, leaving 37.6% of the applications that used neither of the constants. By combining the hybrids with TCP-only applications, the proportion of applications supporting TCP is 51.4% and, correspondingly, 37.3% for UDP. It should not be forgotten that typically all network applications implicitly access DNS over UDP by default.

### 4.1.4 Customizing Networking Stack

While the Sockets API provides transport-layer abstractions with certain system-level defaults, many applications preferred to customize the networking stack or to override some of the parameters. The combined reference ratio of the SOCK_RAW, setsockopt(), pcap_pkthdr and ipq_create_handle() indicators was 51.4%. In other words, the default abstractions and settings of the Sockets API are not sufficient for the majority of the applications. It is worth mentioning that we conducted a brute-force search to find frequently occurring sets of socket options. We did not find any recurring sets, but merely individual socket options that were popular.

### 4.1.5 Multihoming and UDP

In this section, we discuss a practical issue related to UDP-based multihoming, one which could be fixed in most applications by the correct use of the SO_BINDTODEVICE (2.3%) socket option. The issue affects UDP-based applications accepting incoming connections from multiple interfaces or addresses. On Linux, we have reason to believe that many UDP-based applications may not handle multihoming properly for initial connections.
The multihoming problem for UDP manifests itself only when a client-side application uses a server address that does not match the default route at the server. The root of the problem lies in egress datagram processing at the server side. The problem occurs when the client sends a "request" message to the server and the server does not send its "response" using the exact same address pair that was used for the request. Instead, a sloppy server implementation responds to the client without specifying the source address, and the networking stack then invariably chooses the wrong source address, meaning that the client drops the response as it appears to be arriving from a previously unknown IP address.

A straightforward fix is to modify the server-side processing to respect the original IP address and thus prevent the network stack from routing the packet incorrectly. In other words, when the server-side application receives a request, it should remember the local address of the received datagram and use it explicitly when sending the response. Explicit source addressing can be realized by using the modern sendmsg() interface. However, a poorly documented alternative, to be used especially with the sendto() function, is the SO_BINDTODEVICE socket option. The socket option is necessary because bind() can only be used to specify the local address for the ingress direction (and not the egress).

We discovered the UDP problem by accident with the iperf, nc and nc6 software. We have offered fixes to the maintainers of these three pieces of software. Nevertheless, the impact of the problem may be larger, as a third of the software in our statistics supports UDP explicitly. To be more precise, the lack of SO_BINDTODEVICE usage affects 45.7% (as an upper bound) of the UDP-capable software, which accounts for a total of 121 applications.
This figure was calculated by finding the intersection of all applications not using sendmsg() or SO_BINDTODEVICE but still using sendto() and SOCK_DGRAM. We then divided this by the number of applications using SOCK_DGRAM.

### 4.2 Sockets API Extensions

In this section, we show and analyze statistics on SSL and the adoption of a number of Sockets API extensions.

### 4.2.1 Security: SSL/TLS Extensions

Roughly 10.9% of the software in the data set used OpenSSL and 2.1% GNU TLS. In this section, we limit the analysis to OpenSSL because it is more popular. Unless separately mentioned, we will, for convenience, use the term SSL to refer to both the TLS and SSL protocols. We present reference ratios only relative to the applications using OpenSSL because this is more meaningful from the viewpoint of the analysis. In other words, the percentages account for only the 77 OpenSSL-capable applications and not the whole set of applications.

The applications using OpenSSL consisted of both client and server software. The majority of the applications using OpenSSL (54%) consisted of email, news and messaging software. The minority included network security and diagnostic, proxy, gateway, HTTP and FTP server, web browsing, printing and database software. The reference ratios of SSL options remained roughly the same throughout the various Ubuntu releases. The use of SSL options in Ubuntu Lucid is illustrated in Figure 2.

The use of the SSL_get_verify_result() function (37.7%) indicates that a substantial proportion of SSL-capable software is interested in obtaining the results of the certificate verification. The SSL_get_peer_certificate() function (64.9%) is used to obtain the certificate sent by the peer. The use of the SSL_CTX_use_PrivateKey_file() function (62.3%) implies that a majority of the software is capable of using private keys stored in files.
About a quarter (27.3%) of the applications use the SSL_get_current_cipher() function to request information about the cipher used for the current session. The SSL_accept() function (41.6%) is the SSL equivalent of accept(). The reference ratio of the SSL_connect() function (76.6%), the SSL equivalent of connect(), is higher than that of SSL_accept() (41.6%). This implies that the data set includes more client-based applications than server-based ones. Furthermore, we observed that SSL_shutdown() (63.6%) is referenced in only about half of the software that also references SSL_connect(), indicating that clients leave dangling connections with servers (possibly due to sloppy coding practices).

We noticed that only 71.4% of the SSL-capable software initialized the OpenSSL library correctly. The correct procedure for a typical SSL application is to initialize the library with the SSL_library_init() function (71.4%) and to provide readable error strings with the SSL_load_error_strings() function (89.6%) before any SSL action takes place. However, 10.4% of the SSL-capable software fails to provide adequate error handling.

Only 58.4% of the SSL-capable applications seed the Pseudo-Random Number Generator (PRNG) with RAND_load_file() (24.7%), RAND_add() (6.5%) or RAND_seed() (37.7%). This is surprising because incorrect seeding of the PRNG is considered a common security pitfall.

Roughly half of the SSL-capable software sets the context options for SSL with SSL_CTX_set_options() (53.3%); this modifies the default behavior of the SSL implementation. The option SSL_OP_ALL (37.7%) enables workarounds for various known implementation bugs. The SSL_OP_NO_SSLv2 option (31.2%) turns off SSLv2, and SSL_OP_NO_SSLv3 (13.0%) respectively turns off support for SSLv3. The two options were usually combined so that the application would use just TLSv1. SSL_OP_SINGLE_DH_USE (7.8%) forces the implementation to recompute the private part of the Diffie-Hellman key exchange for each new connection.
With the exception of low-performance CPUs, it is usually recommended that this option be turned on, since it improves security. The option SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS (6.5%) disables a protection against an attack on block-chaining ciphers. The countermeasure is disabled because some SSLv3 and TLSv1 implementations are unable to handle it properly.

37.7% of the SSL-capable software prefers to use only TLSv1 (TLSv1_client_method()) and 20.1% prefers to fall back from TLSv1 to SSLv3 when the server does not support TLSv1. However, the use of the SSL_OP_NO_TLSv1 option indicates that 7% of the software is able to turn off TLSv1 support completely. SSL_OP_CIPHER_SERVER_PREFERENCE is used to indicate that the server's preference takes precedence in the choice of the cipher. SSL_OP_NO_SESSION_RESUMPTION_RENEGOTIATION indicates the need for increased security, as session resumption is disallowed and a full handshake is always required. The remaining options are workarounds for various bugs.

As a summary of the SSL results, it appears that SSL-capable applications are interested in the details of the security configuration. However, some applications initialize OpenSSL incorrectly and also trade security for backwards compatibility.

### 4.2.2 IPv6-Related Extensions

During the long transition to IPv6, we believe that the simultaneous co-existence of IPv4 and IPv6 still presents problems for application developers. For example, IPv6 connectivity is still not guaranteed to work everywhere. At the client side, this first appears as a problem with DNS look-ups if they are operating on top of IPv6. Therefore, some applications may try to look up simultaneously over IPv4 and IPv6 [25]. After this, the application may even try to call connect() simultaneously over IPv4 and IPv6.
While these approaches can decrease the initial latency, they also generate some additional traffic to the Internet and certainly complicate the networking logic in the application. At the server side, the applications also have to maintain two sockets: one for IPv4 and another for IPv6. We believe this unnecessarily complicates the network processing logic of applications and can be abstracted away by utilizing network application frameworks.

An immediate solution to the concerns regarding address duplication is proposed in RFC 4291 [6], which describes IPv6-mapped IPv4 addresses. The idea is to embed IPv4 addresses in IPv6 address structures and thus to provide a unified data structure format for storing addresses in the application. Mapped addresses can be employed either manually or by using the AI_V4MAPPED flag for the getaddrinfo() resolver. However, the application first has to explicitly set the IPV6_V6ONLY socket option (0.1%) before the networking stack will allow the IPv6-based socket to be used for IPv4 networking. By default, IPv4 connectivity with IPv6 sockets is disallowed in Linux because mapped addresses introduce security risks [10]. As a bad omen, of the six applications referencing the AI_V4MAPPED flag, only one set the socket option as a safeguard.

The constants introduced by the IPv6 Socket API for Source Address Selection [13] are available in Ubuntu Lucid even though the support is incomplete. The flags to extend the getaddrinfo() resolver and the proposed auxiliary functions remain unavailable; only source address selection through socket options is available. Nevertheless, we calculated the proportion of IPv6-capable client-side applications that explicitly choose a source address. As an upper bound, 66.9% of the applications choose source addresses explicitly, based on the dual use of connect() and bind().
This means that a majority of IPv6 applications might potentially be interested in the extensions of the IPv6 Socket API for Source Address Selection.

### 4.2.3 Other Protocol Extensions

The use of SCTP was very minimal in our set of applications: only three applications used it. Netperf is software used for benchmarking the network performance of various protocols. Openser is a flexible SIP proxy server. The Linux Kernel SCTP tools (lksctp-tools) can be used for testing SCTP functionality in userspace.

As with SCTP, DCCP was also very unpopular. It was referenced from only a single software package, despite being easier to embed in an application by merely using the SOCK_DCCP constant in socket creation.

As described earlier, Multipath TCP, HIP and SHIM6 have optional native APIs. These protocols can be used transparently by legacy applications, which might boost their deployment when compared with the mandatory application changes required for SCTP and DCCP. The APIs for HIP-aware applications [9] may also face a slow adoption path because they require a new domain type for sockets in the Linux kernel. While the getaddrinfo() function can conveniently look up "wildcard" domain types, the success of this new DNS resolver (23.5%) is still challenged by the deprecated gethostbyname() (43.3%). SHIM6 does not face the same problem, as it works without any changes to the resolver, and connections can be transparently "upgraded" to SHIM6 during the communications.

The shared multihoming API for HIP- and SHIM6-aware applications [8] may have a smoother migration path. The API relies heavily on socket options and little on ancillary options. This strikes a good balance because setsockopt() is familiar to application developers (42.8%), while the sendmsg() / recvmsg() interface with its ancillary options is not embraced by many (7%). The same applies to the API for Multipath TCP [16], which consists solely of socket options.
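For reference, a minimal sketch of the modern resolver in use: getaddrinfo() returns a protocol-independent list of candidate addresses that a client would try in order with connect(). The AI_NUMERICHOST flag is used here only to keep the example independent of DNS availability:

```c
#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void) {
    /* The modern resolver: one call covers both IPv4 and IPv6 and is
       thread safe, unlike the obsolete gethostbyname(). */
    struct addrinfo hints = {0}, *res, *p;
    hints.ai_family = AF_UNSPEC;        /* either address family */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;    /* skip DNS for this demo */

    if (getaddrinfo("127.0.0.1", "80", &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    /* A client would try each returned address until connect() succeeds. */
    for (p = res; p != NULL; p = p->ai_next) {
        char host[NI_MAXHOST];
        getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof(host),
                    NULL, 0, NI_NUMERICHOST);
        printf("candidate: %s (family %s)\n", host,
               p->ai_family == AF_INET ? "IPv4" : "IPv6");
    }
    freeaddrinfo(res);
    return 0;
}
```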
### 4.2.4 A Summary of the Sockets API Findings and Their Implications

Table 2 highlights ten of the most important findings on the Sockets APIs. Next, we go through each of them and discuss their implications for the development of network applications.

<table>
<thead>
<tr> <th>Core Sockets API</th> <th>Percentage</th> </tr>
</thead>
<tbody>
<tr> <td>1 IPv4-IPv6 hybrids</td> <td>26.9%</td> </tr>
<tr> <td>2 TCP-UDP hybrids</td> <td>26.3%</td> </tr>
<tr> <td>3 Obsolete DNS resolver</td> <td>43.3%</td> </tr>
<tr> <td>4 UDP-based apps with multihoming issue</td> <td>45.7%</td> </tr>
<tr> <td>5 Customize networking stack</td> <td>51.4%</td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>OpenSSL-based applications</th> <th>Percentage</th> </tr>
</thead>
<tbody>
<tr> <td>6 Fails to initialize correctly</td> <td>28.6%</td> </tr>
<tr> <td>7 Modifies default behavior</td> <td>53.3%</td> </tr>
<tr> <td>8 OpenSSL-capable applications in total</td> <td>10.9%</td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>Estimations on IPv6-related extensions</th> <th>Percentage</th> </tr>
</thead>
<tbody>
<tr> <td>9 Potential misuse with mapped addresses</td> <td>83.3%</td> </tr>
<tr> <td>10 Explicit IPv6 Source address selection</td> <td>66.9%</td> </tr>
</tbody>
</table>

Table 2: A summary of the ten most important findings.

Finding 1. The number of hybrid applications supporting both IPv4 and IPv6 was fairly large. While this is a good sign for the deployment of IPv6, the dual addressing scheme doubles the complexity of address management in applications. At the client side, the application has to choose whether to handle DNS resolution over IPv4 or IPv6, and then create the actual connection with either family. As IPv6 does not yet work everywhere, the client may initiate communications in parallel over IPv4 and IPv6 to minimize latency. Respectively, server-side applications have to listen for incoming data flows on both families.

Finding 2. Hybrid applications using both TCP and UDP occur as frequently as TCP-only applications.
Application developers seem to write many application protocols to be run over both transports. While it is possible to write almost identical code for the two transports, the Sockets API favors different functions for each, which unnecessarily complicates the application code.

Finding 3. The obsolete DNS resolver was referenced twice as frequently as the new one. This has negative implications for the adoption of new Sockets API extensions that depend on the new resolver. As concrete examples, the native APIs for HIP and source address selection for IPv6 may experience a slow adoption path.

Finding 4. We discovered a UDP multihoming problem at the server side based on our experiments with three applications included in the data set. As an upper bound, we estimated that the same problem affects 45.7% of the UDP-based applications.

Finding 5. Roughly half of the networking software is not satisfied with the default configuration of the networking stack and alters it with socket options, raw sockets or other low-level hooking. However, we did not discover any patterns (besides a few popular, individually recurring socket options) to propose as new compound socket option profiles for applications.

Findings 6, 7 and 8. Roughly every tenth application used OpenSSL, but surprisingly many failed to initialize it appropriately, thus creating potential security vulnerabilities. Half of the OpenSSL-capable applications modified the default configuration in some way. Many of these tweaks improved backwards compatibility at the expense of security. This raises the question of why backwards compatibility is not well built into OpenSSL and why so many "knobs" are even offered to the developer.3

Finding 9. IPv6-mapped IPv4 addresses should not be leaked to the wire for security reasons. As a solution, the IPV6_V6ONLY socket option would prevent this leakage. However, only one out of the six applications using mapped addresses was actually using the socket option.
Although the total number of applications using mapped addresses was statistically small, this is an alarming sign because it can grow as the number of IPv6 applications increases. Finding 10. IPv6 source address selection lets an application choose the type of an IPv6 source address instead of explicitly choosing one particular address. The extensions are not adopted yet, but we estimated the need for them in our set of applications. Our coarse-grained estimate is that two out of three IPv6 applications might utilize the extensions. We have now characterized current trends in C-based applications using the Sockets API directly and highlighted ten important findings. Of these, we believe findings 3, 4, 6 and 9 can be directly used to improve the existing applications in our data set. We believe that most of the remaining ones are difficult to improve without introducing changes to the Sockets API (findings 1, 2, 5) or without breaking interoperability (finding 7). Also, many of the applications appear not to need security at all (finding 8), and the adoption of extensions (finding 10) may just take some time. As some of the findings are difficult to adapt to applications using the Sockets API directly, indirect approaches, as offered by network application frameworks, may offer an easier migration path. For example, the first two findings are related to the management of complexity in the Sockets API, and frameworks can be used to hide such complexity from the applications. 4.3 Network Application Frameworks In this section, we investigate four network application frameworks based on the Sockets and POSIX APIs. In a way, these frameworks are just other “applications” using the Sockets API and, thus, similarly susceptible to the same analysis as the applications in the previous sections. However, the benefits of improving a single framework carry over to numerous applications, as frameworks are utilized by several applications.
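The remedy named in finding 9 can be sketched in a few lines. The helper below is illustrative (the name `make_v6only_socket` is ours): it restricts an AF_INET6 socket to IPv6-only traffic so that IPv4 peers never appear as `::ffff:a.b.c.d` mapped addresses; IPv4 clients would then be served by a separate AF_INET socket.

```c
/* Sketch: prevent IPv6-mapped IPv4 addresses from leaking to the
 * wire by setting IPV6_V6ONLY on an AF_INET6 socket (finding 9). */
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int make_v6only_socket(void)
{
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int on = 1;
    /* With IPV6_V6ONLY set, the socket never carries IPv4 traffic
     * via mapped addresses, so none can leak onto the wire. */
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof on) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```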
The Sockets API itself may be difficult to change, but it can be easier to change the details of how a framework implements the complex management of the Sockets API behind its high-level APIs. 4.3.1 Generic Requirements for Modern Frameworks Instead of applying the highlighted findings described in Section 4.2.4 directly, some modifications were made due to the different nature of network application frameworks. Firstly, we reorganize the analysis “top down” and split the topics into end-host naming, look up, multiplicity of names and transport protocols, and security. We also believe that the reorganization may be useful for extending the analysis in the future. Secondly, we arrange the highlighted findings according to their topic. A high-level framework does not have to follow the IP address oriented layout of the Sockets API and, thus, we investigate the use of symbolic host names as well. The reconfiguration of the stack (finding 5) was popular, but we could not suggest any significant improvements on it, so it is omitted. Finally, we split the initiation of parallel connectivity over IPv4 and IPv6 into separate requirements for transport connections and DNS look ups. Consequently, the following list reflects the Sockets API findings as modified requirements for network application frameworks: R1: End-host naming R1.1 Does the API of the framework support symbolic host names in its APIs, i.e., does the framework hide the details of hostname-to-address resolution from the application? If this is true, the framework conforms to a similar API as proposed by Name Based Sockets as described in section 2.2. A benefit of this approach is that implementing requirements R1.2, R2.2, R3.1 and R3.3 becomes substantially easier. R1.2 Are the details of IPv6 abstracted away from the application? In general, this requirement facilitates the adoption of IPv6. It could also be used for supporting Teredo-based NAT traversal transparently in the framework.
R1.3 IPv6-mapped addresses should not be present on the wire for security reasons. Thus, the framework should manually convert mapped addresses to regular IPv4 addresses before passing them to any Sockets API calls. Alternatively, the frameworks can use the AI_V4MAPPED option as a safeguard to prevent such leakage. R2: Look up of end-host names R2.1 Does the framework implement DNS look ups with getaddrinfo()? This is important for IPv6 source address selection and the native HIP API extensions because they depend on this particular function. R2.2 Does the framework support parallel DNS look ups over IPv4 and IPv6 to optimize latency? R3: Multiplicity of end-host names R3.1 IPv6 source address selection is not widely adopted yet, but is the framework modular enough to support it, especially at the client side? As a concrete example, the framework should support the inclusion of new parameters in its counterpart of the connect() call to support application preferences for source address types. R3.2 Does server-side multihoming for UDP work properly? As described earlier, the framework should use the SO_BINDTODEVICE option or the sendmsg()/recvmsg() interfaces in a proper way. R3.3 Does the framework support parallel connect() over IPv4 and IPv6 to minimize the latency of connection set-up? R4: Multiplicity of transport protocols R4.1 Are TCP and UDP easily interchangeable? “Easy” here means that the developer merely changes one class or parameter, but the APIs are the same for TCP and UDP. It should be noted that this also has implications for the adoption of SCTP and DCCP. R5: Security R5.1 Does the framework support SSL/TLS? R5.2 Does the SSL/TLS interface provide reasonable defaults and abstraction so that the developer does not have to configure the details of the security? R5.3 Does the framework initialize the SSL/TLS implementation automatically?
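The `recvmsg()` approach mentioned in R3.2 can be sketched as follows, assuming Linux and IPv4 for brevity; the helper names `enable_pktinfo` and `recv_with_dst` are ours. A multihomed UDP server learns which local address each datagram targeted (via the `IP_PKTINFO` ancillary message) so that it can reply from that same address instead of letting the stack pick one.

```c
/* Sketch of requirement R3.2 on Linux: learn the local destination
 * address of each incoming UDP datagram with IP_PKTINFO/recvmsg(). */
#define _GNU_SOURCE
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>

/* Ask the stack to deliver per-datagram destination information. */
int enable_pktinfo(int fd)
{
    int on = 1;
    return setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof on);
}

/* Receive one datagram; record the peer and the local address the
 * datagram was sent to, so a reply can use that source address. */
ssize_t recv_with_dst(int fd, void *buf, size_t len,
                      struct sockaddr_in *peer, struct in_addr *local_dst)
{
    char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
    struct iovec iov = { buf, len };
    struct msghdr msg;
    struct cmsghdr *cm;
    ssize_t n;

    memset(&msg, 0, sizeof msg);
    msg.msg_name = peer;
    msg.msg_namelen = sizeof *peer;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof cbuf;

    n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return n;

    for (cm = CMSG_FIRSTHDR(&msg); cm != NULL; cm = CMSG_NXTHDR(&msg, cm))
        if (cm->cmsg_level == IPPROTO_IP && cm->cmsg_type == IP_PKTINFO) {
            struct in_pktinfo *pi = (struct in_pktinfo *)CMSG_DATA(cm);
            *local_dst = pi->ipi_addr;  /* address the datagram targeted */
        }
    return n;
}
```

A reply would then pass the recorded address back in an `IP_PKTINFO` control message to `sendmsg()`; the IPv6 counterpart uses `IPV6_RECVPKTINFO` and `struct in6_pktinfo` analogously.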
4.3.2 ACE ACE version 6.0.0 denotes one end of a transport-layer session with the ACE_INET_Addr class, which can be initialized from either a symbolic host name or a numeric IP address. Thus, the support for IPv6 is transparent if the developer relies solely on host names and uses AF_UNSPEC to instantiate the class. ACE also supports storing IPv4 addresses in the IPv6-mapped format internally but translates them to the normal IPv4 format before returning them to the requesting application or using them on the wire. In ACE, IP addresses can be specified using strings. This provides a more unified format to name hosts. ACE supports the getaddrinfo() function and resorts to getnameinfo() only when the OS (e.g. Windows) does not support getaddrinfo(). With UDP, ACE supports both connected (class \texttt{ACE\_SOCK\_CODgram}) and disconnected communications (class \texttt{ACE\_SOCK\_Dgram}). We verified the UDP multihoming problem with test software included in the ACE software bundle. More specifically, we managed to repeat the problem with connected sockets, which means that the ACE library shares the same bug as the iperf, nc and nc6 software described earlier. Disconnected UDP communications did not suffer from this problem because ACE does not fix the remote communication end-point for such communications with \texttt{connect}(). It should also be noted that a separate class, \texttt{ACE\_Multihomed\_INET\_Addr}, supports multiaddressing natively. A client can connect to a server over TCP with the class \texttt{ACE\_SOCK\_Connector} in ACE. The instantiation of the class supports flags which could be used for extending ACE to support IPv6 source address selection in a backwards compatible manner. While the instantiation of connected UDP communications does not have a similar flag, it still includes a few integer variables used as binary arguments that could be overloaded with the required functionality.
Alternatively, new instantiation functions with a different method signature could be defined using C++. As such, ACE seems modular enough to adopt IPv6 source address selection with minor changes. For the basic classes, ACE does not support accepting communications simultaneously over both IPv4 and IPv6 at the server side. The class \texttt{ACE\_Multihomed\_INET\_Addr} has to be used to support such behaviour more seamlessly, and it can be used both at the client and the server side. Changing the transport protocol in ACE is straightforward. The abstract class \texttt{ACE\_SOCK\_IO} defines the basic interfaces for sending and transmitting data. The class is implemented by two classes: an application instantiates the \texttt{ACE\_SOCK\_Stream} class to use TCP or \texttt{ACE\_SOCK\_Dgram} to use UDP. While both the TCP and UDP-specific classes supply some additional transport-specific methods, switching from one transport to another occurs merely by renaming the type of the class at the instantiation, assuming the application does not need the transport-specific methods. ACE supports SSL, albeit not as interchangeably as TCP with UDP. ACE has wrappers around the \texttt{accept()} and \texttt{connect()} calls in its Acceptor-Connector pattern. This hides the intricacies of SSL, but all of the low-level details are still configurable when needed. SSL is initialized automatically and correctly. 4.3.3 Boost::Asio Boost::Asio version 1.47.0 provides a class for denoting one end of a transport-layer session, called endpoint, that can be initialized by resolving a host name or from a numeric IP address. By default, the resolver returns a set of endpoints that may contain both IPv4 and IPv6 addresses. These endpoints can be given directly to the \texttt{connect()} wrapper in the library, which connects sequentially to the addresses found in the endpoint set until it succeeds. Thus, the support for IPv6 is transparent if the developer has chosen to rely on host names.
Boost::Asio can store IPv4 addresses in the IPv6-mapped form. By default, the mapped format is used only when the developer explicitly sets the family of the address to be queried to IPv6 and the query results contain no IPv6 addresses. The mapped format is only used internally and converted to IPv4 before use on the wire. Boost::Asio uses POSIX \texttt{getaddrinfo()} when the underlying OS supports it. On systems such as Windows (older than XP) and Cygwin, Boost::Asio emulates the \texttt{getaddrinfo()} function by calling the \texttt{gethostbyaddr()} and \texttt{gethostbyname()} functions. The resolver in Boost::Asio includes flags that could be used for implementing source address selection (and socket options are supported as well). Boost::Asio does not support parallel IPv4 and IPv6 queries, nor does it provide support for simultaneous connection set-up using both IPv4 and IPv6. We verified the UDP multihoming problem with example software provided with Boost::Asio. We managed to repeat the UDP multihoming problem with connected sockets, which means that the Boost::Asio library shares the same bug as iperf, nc and nc6 as described earlier. Boost::Asio defines basic interfaces for sending and receiving data. An application instantiates \texttt{ip::tcp::socket} to use TCP or \texttt{ip::udp::socket} to use UDP. While both classes provide extra transport-specific methods, switching from one transport to another occurs merely by renaming the type of the class at the instantiation, assuming the application does not need the transport-specific methods.

\textsuperscript{4}IPv6 addresses are queried only when an IPv6 loopback is present.

Boost::Asio supports SSL and TLS. The initialization is wrapped into the SSL context creation.
In Boost::Asio, the library initialization is actually done twice, as OpenSSL_add_ssl_algorithms() is a synonym of SSL_library_init() and both are called sequentially. The PRNG is not automatically seeded with RAND_load_file(), RAND_add() or RAND_seed(), although Boost::Asio implements the class random_device, which can easily be used in combination with RAND_seed() to seed the PRNG. 4.3.4 Java.net Java.net in OpenJDK Build b147 supports both automated connections and manually created ones. Within a single method that takes a host name, its API hides the DNS resolution of the host name to an IP address, the creation of the socket, and the connecting of the socket. Alternatively, the application can manage all of the intermediate steps by itself. The API has a data structure to contain multiple addresses from DNS resolution. The default is to try a connection only with a single address upon request, albeit this is configurable. The internal presentation of a single address, InetAddress, can hold an IPv4 or IPv6 address and, therefore, the address family is transparent when the developer relies solely on host names. The API supports the v4-mapped address format as an internal presentation format, but it is always converted to the normal IPv4 address format before sending data to the network. Before using IPv6, Java.net checks the existence of the constant AF_INET6 and that a socket can be associated with a local IPv6 address. If Java.net discovers support for IPv6 in the local host, it uses the getaddrinfo() function and otherwise the gethostbyname() function for name resolution. Simultaneous DNS queries over IPv4 and IPv6 are not supported out of the box. However, the SIP ParallelResolver package in SIP communicator\(^5\) could be used to implement such functionality. We verified the UDP multihoming problem with example software provided with Java.net. We managed to repeat the UDP multihoming problem with connected sockets.
This means that the Java.net library shares the same bug as iperf, nc and nc6 as described earlier. The Java.net naming convention favors TCP because a “socket” always refers to a TCP-based socket. If the developer needs a UDP socket, he or she has to instantiate the DatagramSocket class. Swapping between the two protocols is not trivial because TCP-based communication uses streams, whereas UDP-based communication uses DatagramPacket objects for I/O. IPv6 source address selection is implementable in Java.net. TCP and UDP-based sockets could include a new type of constructor or method, and Java has socket options as well. The method for DNS look ups, InetAddress.getByName(), is not extensible enough and would need an overloaded method for the purpose. Java.net supports both SSL and TLS. Their details are hidden by abstraction, although it is possible to configure them explicitly. All initialization procedures are automatic. 4.3.5 Twisted With Twisted version 10.2, Python-based applications can directly use host names to create TCP-based connections. However, the same does not apply to UDP; the application has to manually resolve the host name into an IP address before use. With the exception of resolving AAAA records from the DNS, IPv6 support is essentially missing from Twisted. Thus, mapped addresses and parallel connections over IPv4 and IPv6 remain unsupported due to the lack of proper IPv6 support. Some methods and classes include a “4” suffix to hard-code certain functions only to IPv4, which can hinder IPv6 interoperability. Introducing IPv6 source address selection to Twisted would be relatively straightforward, assuming IPv6 support is eventually implemented. For example, Twisted's method wrappers for the connect() function accept host names. Therefore, the methods could be adapted to include a new optional argument to specify source address preferences.
The Twisted framework uses gethostbyname() but also has its own implementation of DNS, both for the client and the server side. As IPv6 support is missing, the framework cannot support parallel look ups. The UDP multihoming issue is also present in Twisted. We observed this by experimenting with a couple of client and server UDP applications in the Twisted source package. TCP and UDP are quite interchangeable in Twisted when the application uses the Endpoint class because it provides abstracted read and write operations. However, two discrepancies exist. First, the Creator class is tainted by TCP-specific naming conventions in its method connectTCP(). Second, applications cannot read or write UDP datagrams directly using host names but first have to resolve them into IP addresses. Twisted supports TLS and SSL in separate classes. TLS/SSL can be plugged into an application with relative ease due to the modularity and high-level abstraction of the framework. The details of SSL/TLS are configurable, and Twisted provides defaults for applications that do not need special configurations. With the exception of seeding the PRNG, the rest of the details of TLS/SSL initialization are handled automatically.

\(^5\)net.java.sip.communicator.util.dns.ParallelResolver

4.3.6 A Summary of the Framework Results We summarize how the requirements were met by each of the four frameworks in Table 3. Some of the requirements were unmet in all of the frameworks. For example, all frameworks failed to support UDP-based multihoming (R3.2) and parallel IPv4/IPv6 connection initialization for clients (R3.3). Also, SSL/TLS initialization (R5.3) was not implemented correctly in all frameworks. In total, 56% of our requirements were completely met by the frameworks. 5 Related and Future Work At least three other software-based approaches to analyzing applications exist in the literature. Camara et al. [3] developed software and models to verify certain errors in applications using the Sockets API.
Ammons et al. [1] have investigated machine learning to reverse engineer protocol specifications from source code based on the Sockets API. Palix et al. [14] have automated the discovery of faults in the Linux kernel and conducted a longitudinal study. <table> <thead> <tr> <th>Req.</th> <th>ACE</th> <th>Boost::Asio</th> <th>Java.net</th> <th>Twisted</th> </tr> </thead> <tbody> <tr> <td>R1.1</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> </tr> <tr> <td>R1.2</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> </tr> <tr> <td>R1.3</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>N/A</td> </tr> <tr> <td>R2.1</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> </tr> <tr> <td>R2.2</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> </tr> <tr> <td>R3.1</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> </tr> <tr> <td>R3.2</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> </tr> <tr> <td>R3.3</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> </tr> <tr> <td>R4.1</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> </tr> <tr> <td>R5.1</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> </tr> <tr> <td>R5.2</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> </tr> <tr> <td>R5.3</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> </tr> </tbody> </table> Table 3: Summary of how the frameworks meet the requirements We did not focus on the development of automated software tools but rather on the discovery of a number of novel improvements to applications and frameworks using the Sockets API. While our findings could be further automated with the tools utilized by Camara, Ammons and Palix et al., we believe such an investigation would be in the scope of another article. Similarly to our endeavors with multihoming, the Multiple Interfaces working group in the IETF tackles the same problem but in a broader sense [2, 24]. Our work supplements theirs, as we explained a very specific multihoming problem with UDP, the extent of the problem in Ubuntu Linux, and the technical details of how the problem can be addressed by developers.
6 Conclusions In this article, we showed empirical results based on a statistical analysis of open-source network software. Our aim was to understand how the Sockets API and its extensions are used by network applications and frameworks. We highlighted ten problems with security, IPv6 and configuration. In addition to describing the generic technical solutions, we also reported the extent of the problems. As the most important finding, we discovered that 28.6% of the C-based network applications in Ubuntu are vulnerable to attacks because they fail to initialize OpenSSL properly. We applied the findings from the C-based applications to four example frameworks based on the Sockets API. In contrast to the C-based applications, we analyzed the frameworks in a top-down fashion along the generalized dimensions of end-host naming, multiplicity of names and transports, name look up, and security. Consequently, we proposed 12 networking requirements; in total, a little over half of them were completely met by the frameworks. For example, all four frameworks consistently failed to support UDP-based multihoming and parallel IPv4/IPv6 connection initialization for clients. Also, the TLS/SSL initialization issue was present in some of the frameworks. With the suggested technical solutions for Linux, we argue that hand-held devices with multi-access capabilities gain improved support for UDP, that the end-user experience can be improved by reducing latency in IPv6 environments, and that security is improved for SSL/TLS in general. 7 Acknowledgments We would like to thank Tao Wan for his initial work on the topic. We appreciate the discussions with Dmitriy Kuptsov, Antti Louko, Teemu Koponen, Antti Ylä-Jääski, Jukka Nurminen, Andrey Lukyanenko, Boris Nechaev, Zhonghong Ou, Cui Yong, Vern Paxson, Stefan Götz and Suvi Koskinen around the topic. The authors also express their gratitude to the anonymous reviewers for their comments.
This work was supported by grant numbers 139144 and 135230 from the Academy of Finland. References
An ML Editor Based on Proofs-as-Programs Jon Whittle\(^1\), Alan Bundy\(^1\), Richard Boulton\(^1\), and Helen Lowe\(^2\) \(^1\) Division of Informatics, University of Edinburgh, 80 South Bridge, Edinburgh EH1 1HN, Scotland. \(^2\) Dept of Computer Studies, Glasgow Caledonian University, City Campus, Cowcaddens Road, Glasgow G4 0BA, Scotland. jonathw@dai.ed.ac.uk Abstract. CYNTHIA is a novel editor for the functional programming language ML in which each function definition is represented as the proof of a simple specification. Users of CYNTHIA edit programs by applying sequences of high-level editing commands to existing programs. These commands make changes to the proof representation from which a new program is then extracted. The use of proofs is a sound framework for analysing ML programs and giving useful feedback about errors. Amongst the properties analysed within CYNTHIA at present is termination. CYNTHIA has been successfully used in the teaching of ML in two courses at Napier University. 1 Introduction Current programming environments for novice functional programming (FP) are inadequate. This paper describes ways of using mechanised theorem proving to improve the situation, in the context of the language ML [9]. ML is a strongly-typed FP language with type inference [4]. ML incorporates extensive use of pattern matching. Datatypes are defined by a number of constructors which can be used to write patterns which define a function. The most common way to write ML programs is via a text editor and compiler (such as the Standard ML of New Jersey compiler). Such an approach is deficient in a number of ways. Program errors, in particular type errors, are generally difficult to track down. For novices, the lack of debugging support forms a barrier to learning FP concepts [14]. CYNTHIA is an editor for a subset of ML that provides improved support for novices. Programs are created incrementally using a collection of correctness-preserving editing commands.
Users start with an existing program which is adapted using the commands. This means fewer errors are made. CYNTHIA’s improved error-feedback facilities enable errors to be corrected more quickly. Specifically, CYNTHIA provides the following correctness guarantees: 1. syntactic correctness; 2. static semantic correctness, including type correctness as well as checking for undeclared variables or functions, or duplicate variables in patterns etc.; 3. well-definedness — all patterns are mutually exhaustive and have no redundant matches; 4. termination. Note that, in contrast to the usual approach, correctness-checking is done incrementally. Errors (1), (3) and (4) can never be introduced into CYNTHIA programs. (2) may be introduced as in general it is impossible to transform one program into another without passing through states containing such errors. However, all such errors are highlighted to the user by colouring program expressions in the program text. The incremental nature of CYNTHIA means that as soon as an error is introduced, it is indicated to the user, although the user need not change it immediately. In CYNTHIA, each ML function definition is represented as a proof of a specification of that function, using the idea of proofs-as-programs [6]. As editing commands are applied, the proof is developed hand-in-hand with the program, as shown in Fig. 1. The user starts with an existing program and a corresponding initial proof (from an initial library). The edits are actually applied to the proof, giving a new partial proof which may contain gaps or inconsistencies. CYNTHIA attempts to fill these gaps and resolve inconsistencies. Any which cannot be resolved are fed back to the user as program errors. **Fig. 1.** Editing Programs in CYNTHIA. CYNTHIA’s proofs are written in Oyster [3], a proof-checker implementing a variant of Martin-Löf Type Theory [7].
Oyster specifications (or conjectures) may be written to any level of detail, but to make the proof process tractable in real-time, CYNTHIA specifications are restricted severely. Specifications state precisely the type of the function and various lemmas needed for termination analysis (see §3.1). Proofs of such specifications provide guarantees (1)-(4) above. Given this restriction, all theorem proving can be done automatically. The type systems of Oyster and ML are not quite the same. In particular, in ML type-checking is decidable, which is not true of Oyster. However, it is possible to restrict to a subset of Oyster’s types which resembles that of ML very closely. We only consider a functional subset of the Core ML language [14]. In addition, we exclude mutual recursion and type inference. Mutual recursion could be added by extending the termination checker. We made a conscious decision to insist that the user provide type declarations. This is because the system is primarily intended for novices and investigations have shown that students find type inference confusing [14]. Given that edits are done incrementally anyway, providing a type declaration is not too burdensome. A possible future project is to extend CYNTHIA for expert users. This version would include type inference. 2 An Example of CYNTHIA in Action Fig. 2 shows an example of an interaction with CYNTHIA. The datatypes exp and statement and the function unparse_exp are already defined. They represent the abstract syntax of a simple imperative programming language. unparse_exp is an unparser for expressions. Suppose the user wishes to modify this function into a function, unparse_st, to unparse statements. unparse_st can be generated by applying a sequence of CYNTHIA's edits to unparse_exp. The first thing to do is to apply RENAME to any occurrence of unparse_exp. The user specifies a new name, unparse_st, and CYNTHIA carries out a global rename. More interesting is the command CHANGE TYPE.
In general, when changing type from $T_1$ to $T_2$, CYNTHIA finds a mapping between the constructors of $T_1$ and those of $T_2$. In this example, CYNTHIA finds the mapping: $$\text{Var} \mapsto \{\text{Empty}\}, \text{Const} \mapsto \{\text{Assign}\}, \text{Op} \mapsto \{\text{Cond}, \text{While}, \text{Block}\}$$ Many possible mappings could have been found, but CYNTHIA restricts to mappings which map (non-)recursive constructors to (non-)recursive constructors. In addition, each constructor of type $T_2$ must have a pre-image. This guarantees that the new patterns produced by CHANGE TYPE are well-defined. Note how CYNTHIA produces a well-defined set of patterns for statement. CYNTHIA finds a similar mapping for the arguments of each constructor. In some cases, fresh variables may have to be introduced (e.g. the clause for Assign), or variables may be dropped (e.g. the clause for While). After the application of CHANGE TYPE, the definition of unparse_st contains errors. CYNTHIA highlights these to the user in different colours. In this paper, boxes denote type errors and circles denote other semantic errors. The user may now use these annotations as a guide to finish the definition. Consider the While clause, immediately after CHANGE TYPE is applied.

    unparse_st (While(s,e1)) = unparse_st e1 ^ s ^ unparse_st e2

CYNTHIA tells the user that there are two errors here. By using CYNTHIA's type inspection facility, the user may highlight s and discover that the reason for the type error is that s has type exp. To rectify this, the user applies CHANGE TERM to replace the boxed occurrence of s with unparse_exp s. e2 is circled because it is not declared. In response, the user invokes CHANGE TERM to replace it by e1. The expression now contains no errors but to give the correct result, the user replaces unparse_st e1 by "while " and introduces "do ".
The user may add further ML constructs by using the command ADD CONSTRUCT. The final stage of writing unparse_st involves using this command twice — once to add a local variable declaration and once to add a conditional statement. The user specifies the parameters to \texttt{let val} and \texttt{if} and then uses \texttt{CHANGE TERM} to make any further modifications.

    datatype exp = Var of string
                 | Const of string
                 | Op of exp * string * exp;

    datatype statement = Empty
                       | Assign of string * exp
                       | Cond of exp * statement * statement
                       | While of exp * statement
                       | Block of statement * statement;

After RENAME and CHANGE TYPE (unparse_st : statement -> string):

    unparse_st (Assign(t,e)) = t
    unparse_st (Cond(s,e1,e2)) = unparse_st e1 ^ s ^ unparse_st e2
    unparse_st (While(s,e1)) = unparse_st e1 ^ s ^ unparse_st e2
    unparse_st (Block(e1,e2)) = unparse_st e1 ^ unparse_st e2;

After CHANGE TERM (multiple times), ADD CONSTRUCT (LET VAL), ADD CONSTRUCT (IF) and CHANGE TERM:

    unparse_st (Assign(t,e)) = t ^ unparse_exp e
    unparse_st (Cond(s,e1,e2)) =
        let val (ss : string) = "if " ^ unparse_exp s ^ "then " ^ unparse_st e1
        in if e2 = Empty then ss else ss ^ "else " ^ unparse_st e2
        end
    unparse_st (While(s,e1)) = "while " ^ unparse_exp s ^ "do " ^ unparse_st e1
    unparse_st (Block(e1,e2)) = "begin " ^ unparse_st e1 ^ unparse_st e2 ^ "end";

Fig. 2. Modifying unparse_exp into unparse_st.

CYNTHIA has other commands too. MAKE PATTERN replaces a variable by a number of patterns — one for each constructor of the datatype. In this way, arbitrarily complex patterns can be built up and are guaranteed to be well-defined. ADD RECURSIVE CALL allows the user to construct functions with new recursion schemes. CYNTHIA keeps (and displays) a list of currently valid recursive calls — i.e. recursive calls which may be used in the program without compromising termination. The user may add to this by applying ADD RECURSIVE CALL and specifying a new recursive call. CYNTHIA then checks that this new call maintains the termination property and, if so, makes it available during editing. For further details about CYNTHIA's editing commands, see [13].

3 Representing ML Definitions as Proofs

This section presents the underlying proof engine in CYNTHIA. Note that all the theorem proving is completely hidden from the user, so the user of CYNTHIA requires no specialised knowledge of logic or proof. We will use an ongoing example to illustrate the ideas — the representation of qsort, illustrated in Fig.
3.\(^1\)

    fun partition f k nil = nil
      | partition f k (h::t) = if f(h,k) then h::partition f k t
                               else partition f k t;

    fun qsort nil = nil
      | qsort (h::t) = qsort (partition (op <) h t) @ [h]
                       @ qsort (partition (op >=) h t);

Fig. 3. A Version of Quicksort.

3.1 Termination Analysis

One of the main correctness guarantees provided by CYNTHIA is termination. Termination is in general undecidable. Hence, the usual approach is to provide the user with a pre-defined set of well-founded induction schemes. To use a scheme not specified in this set, the user must specify an ordering and prove that this ordering is well-founded. Since CYNTHIA is meant for programmers, not logicians, the user must not be expected to carry out such theorem proving. The difficulty in designing CYNTHIA, then, is to find a decidable subset of terminating programs that is large enough to include most definitions a (novice) ML programmer may want to create. The set of Walther Recursive functions [8] is such a set. CYNTHIA restricts the user to this set, which includes primitive recursive functions over an inductively-defined datatype, multiple recursive functions, nested recursive functions and functions that reference previously defined functions in a recursive call, such as qsort.

Walther Recursion assumes a fixed size ordering, with a semantics defined by the rules in Fig. 6. Intuitively, this ordering is defined as follows:

\( w(c(u_1, \ldots, u_n)) = 1 + \sum_{i \in R_c} w(u_i) \)

where \( c \) is a constructor and \( R_c \) is the set of recursive arguments of \( c \). In the case of lists, this measure is just length. There are two parts to Walther Recursion — reducer / conserver (RC) analysis and measure argument (MA) analysis.
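The fixed size measure can be illustrated with a small Python sketch (Python rather than ML, purely for illustration; the term encoding and the RECURSIVE_ARGS table are invented for this sketch and are not part of CYNTHIA):

```python
# Terms as (constructor, [args]); RECURSIVE_ARGS records, per constructor,
# which argument positions are recursive. For int list: nil has no arguments,
# :: has (head, tail) with the tail (position 1) recursive. Leaf/Node encode
# a hypothetical binary-tree datatype Node(left, label, right).
RECURSIVE_ARGS = {"nil": [], "::": [1], "Leaf": [], "Node": [0, 2]}

def w(term):
    """Fixed size measure: w(c(u1,...,un)) = 1 + sum of w(ui) over the
    recursive argument positions of c."""
    c, args = term
    return 1 + sum(w(args[i]) for i in RECURSIVE_ARGS[c])

def ml_list(xs):
    """Build the term for an ML list from a Python list."""
    t = ("nil", [])
    for x in reversed(xs):
        t = ("::", [x, t])
    return t

# On lists the measure is length plus one (nil itself counts 1), so it
# orders lists exactly by length, as stated in the text.
assert w(ml_list([7, 8, 9])) == 4
```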
Every time a new definition is made, reducer / conserver lemmas are calculated for the definition. These place a bound on the definition based on the fixed size ordering. To guarantee termination, it is necessary to consider each recursive call of a definition and show that the recursive arguments decrease with respect to this ordering. Since recursive arguments may in general involve references to other functions, a measure decrease is guaranteed by utilising previously derived RC lemmas. The distinction between reducer and conserver lemmas is given as follows. First, define the semantics of the inequality operator.

**Definition 1.** \( u \leq_w t \) if the following conditions hold:

- If \( u \) is well typed then \( t \) is well typed.
- If \( u \) is well typed then the top level constructor of \( u \) is either a base constructor or the same as the top level constructor of \( t \).
- If \( u \) is well typed then the measure of \( u \), \( w(u) \), is no larger than the measure of \( t \), \( w(t) \).

Define strict inequality in a similar way.

**Reducer / Conserver Analysis**

**Definition 2.** A function \( f \) is a reducer on its \( i \)th argument if

\[ f\, x_1 \ldots x_n <_w x_i \quad (1) \]

and a conserver on its \( i \)th argument if

\[ f\, x_1 \ldots x_n \leq_w x_i \quad (2) \]

To simplify the analysis, \( <_w \) can be eliminated by rewriting (1) as:

\[ f\, x_1 \ldots c_j(\ldots, r_{j,k}, \ldots) \ldots x_n \leq_w r_{j,k} \]

where \( c_j \) is a constructor and \( r_{j,k} \) is a recursive argument of \( c_j \). This means that only one form of inequality is ever present. RC analysis is done each time a definition is made. Consider partition.

\(^1\) :: is the ML cons operator for lists. @ is append.
\(^2\) If \( c(u_1, \ldots, u_n) \) has type \( T \) then the recursive arguments of \( c \) are the \( i \) such that \( u_i \) also has type \( T \). A constructor is a step constructor if at least one of its arguments is recursive, and is a base constructor otherwise.
It satisfies the conserver lemma:

\[ \text{partition}\, f\, k\, z \leq_w z \]

This is proved by the rules in Fig. 6 and induction.

**Measure Argument Analysis**

**Definition 3.** Given a function \( f \), defined over arguments \( x_1, \ldots, x_n \), the set of measure arguments is the set of \( i \) such that for every recursive call \( f\, u_1 \ldots u_n \) of \( f \), \( u_i \leq_w x_i \).

1. Find measure arguments, \( M \), for \( f \) by considering each \( x_i \) in turn and applying the rules in Fig. 6;
2. If \( M = \emptyset \), termination analysis fails. Otherwise, for each recursive call \( f\, u_1 \ldots u_n \), try to find an \( m \in M \) such that \( u_m <_w x_m \) — i.e. if \( x_m \) is a constructor term \( c(\ldots, r_j, \ldots) \), we need \( u_m \leq_w r_j \) for some \( j \). If this can be done for all recursive calls, then \( f \) terminates; otherwise termination analysis fails.

Fig. 4. Procedure for Checking Termination.

Measure argument (MA) analysis involves showing that the measure decreases over each recursive call. To check for termination, the procedure in Fig. 4 is adopted. In attempting to derive \( u_m <_w x_m \), it may be necessary to use previously defined RC lemmas. Consider qsort. In this example \( M = \{1\} \), since partition (op <) h t \( \leq_w t \) and partition (op >=) h t \( \leq_w t \). Since \( t \leq_w h::t \), termination is proved. It is worth pointing out that for the measure argument analysis to guarantee termination, the function must be defined by a well-defined pattern.

In [8], Walther Recursion was described for a small functional language with a syntax and semantics different to that of ML. We made extensions to encompass the subset of ML supported by CYNTHIA. The major changes were:

- In the language in [8], definitions are made using destructors. It is more natural to use constructors in ML. Therefore, the rules were recast in constructor fashion.
- McAllester suggests a forward application of the rules.
\texttt{CYNTHIA} is based on a backwards style so our system sets up subgoals for each possible lemma and then applies the rules in a backwards fashion. - A function defined by an exhaustive pattern cannot be a reducer because the measure of the base case argument cannot be reduced. McAllester forces the user to make an additional definition, restricted to non-base-cases. It is naive to expect programmers to go through this process of making additional definitions. A better solution is to place side-conditions on reducer lemmas that rule out base cases. This allows the user to write definitions as normal. - [8] does not include ML \texttt{case} expressions or local function declarations. It does allow local variable declarations but only of the form $\texttt{dec = exp}$ where $\texttt{dec}$ is a variable. In \texttt{CYNTHIA} $\texttt{dec}$ may be a pattern. ### 3.2 Specifications Each ML function is represented by a proof with specification (i.e. top-level goal) that is precisely the type of the function along with lemmas required for termination analysis. In general, such specifications may specify arbitrarily complex behaviour about the function. However, \texttt{CYNTHIA} specifications are deliberately rather weak so that the theorem proving task can be automated. CYNTHIA specifications are defined as follows. **Definition 4.** A CYNTHIA specification of an ML function is of the form: \[ P : (\forall z_1 : T_1. \ldots \forall z_n : T_n. 
(f\ z_1 \ldots z_n) : T_0 \;\wedge\; (f\ z_1 \ldots z_n) \leq_w z_{n_1} \wedge \ldots \wedge (f\ z_1 \ldots z_n) \leq_w z_{n_r} \;\wedge\; (f\ z_1 \ldots c_{j_1}(\ldots, r_{j_1,k_1}, \ldots) \ldots z_n) \leq_w r_{j_1,k_1} \wedge \ldots \wedge (f\ z_1 \ldots c_{j_r}(\ldots, r_{j_r,k_r}, \ldots) \ldots z_n) \leq_w r_{j_r,k_r}) \quad (5) \]

where:

- \( f \) represents the name of the function\(^3\);
- \( T_1 \rightarrow \ldots \rightarrow T_n \rightarrow T_0 \) is the type of the function;
- \( P \) is a variable representing the definition of the ML function. \( P \) gets instantiated as the inference rules are applied. A complete proof instantiates \( P \) to a complete program. This is a standard approach to extracting programs from proofs;
- \( c_{j_1}, \ldots, c_{j_r} \) are constructors;
- \( n_1, \ldots, n_r \in \{1, \ldots, n\} \).

The first part of the specification merely states the existence of a function of type \( T_1 \rightarrow \ldots \rightarrow T_n \rightarrow T_0 \). Clearly, there are an infinite number of proofs of such a specification. The particular function represented in the proof is given by the user, however, since each editing command application corresponds to the application of a corresponding inference rule. In addition, many possible proofs are outlawed because the proof rules (and corresponding editing commands) have been designed in such a way as to restrict to certain kinds of proofs, namely those that correspond to ML definitions. The second part of the specification states RC lemmas that hold for the function. In the example, the specification for partition is:

\[ P : (\forall z_1 : (\text{int} * \text{int} \rightarrow \text{bool}).\ \forall z_2 : \text{int}.\ \forall z_3 : \text{int list}.\ (f\ z_1\ z_2\ z_3) : \text{int list} \wedge (f\ z_1\ z_2\ z_3) \leq_w z_3) \]

CYNTHIA specifications are in fact dynamic — in the sense that, as edits are applied, the specification may be changed to reflect the modifications.
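Putting §3.1 together, the interplay between RC lemmas and the measure-argument check of Fig. 4 can be sketched symbolically in Python (a toy model, not CYNTHIA's proof-based check; the term encoding, the CONSERVER table and all names are invented for illustration):

```python
# RC lemma database: function name -> index of the argument it conserves.
# "partition": 2 encodes the lemma  partition f k z <=_w z  from §3.1.
CONSERVER = {"partition": 2}

def leq_w(u, x):
    """u <=_w x for toy terms ("var", name) and ("call", fname, args):
    syntactic equality, or u is a call to a function known by an RC lemma
    to conserve its i-th argument, applied so that argument is <=_w x."""
    if u == x:
        return True
    if u[0] == "call":
        _, fname, args = u
        i = CONSERVER.get(fname)
        return i is not None and leq_w(args[i], x)
    return False

def qsort_terminates():
    """Fig. 4 check for qsort: the measure argument is the list, matched
    against the pattern h::t, so every recursive argument must be <=_w t."""
    t = ("var", "t")
    h = ("var", "h")
    rec_args = [("call", "partition", [("var", "op<"), h, t]),
                ("call", "partition", [("var", "op>="), h, t])]
    return all(leq_w(u, t) for u in rec_args)
```

Running `qsort_terminates()` succeeds precisely because the conserver lemma for partition bounds each recursive argument by t, the recursive part of the pattern h::t.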
### 3.3 Inference Rules

Each ML function definition is represented by a proof of the relevant specification. There are three kinds of inference rules used in these proofs. Fig. 5 gives rules that mirror the structure of the ML definition. Each program construct has a corresponding inference rule. When the user introduces a construct using the editing commands, the appropriate inference rule is applied to the current goal in the proof. Fig. 5 omits the rules for the ML constructs fn and case — see [14]. As each rule is applied, the variable which represents the program (\( P \) in (5)) is gradually instantiated. Rules are written in sequent calculus fashion.

WITNESS is similar to the usual \( \exists R \) rule. LET FUN introduces a local function into the program. In proof terms, this corresponds to a lemma stating the existence of a function \( f \) of type \( T_1 \rightarrow \ldots \rightarrow T_n \rightarrow T_0 \) satisfying certain RC lemmas. IND is a super-rule setting up an induction corresponding to the recursion in the program and also setting up an induction to show the termination of this recursion scheme. \( a_{b_1}, \ldots, a_{b_n} \) are base cases; \( u_1, \ldots, u_n \) are therefore non-recursive arguments. For the sake of clear presentation, each constructor \( c_h \) is restricted to have only one argument. \( a_{s_1}, \ldots, a_{s_n} \) are step cases. Each \( v_{ij} \) is a recursive argument. Again, we restrict to just two arguments.

There are two things going on with the IND rule. Firstly, subgoals are set up to carry out measure argument analysis — i.e. to check that the recursive calls \( R_{s_{ij}} \) are measure decreasing. This is true as long as each \( R_{s_{ij}} \) is measure preserving on a strict subexpression of the pattern over which recursion is defined. Secondly, IND carries out an induction to show that the RC lemmas in the specification hold.
The induction scheme is based on the patterns over which the ML function is defined. For each pattern \( c_h(v_{i1}, v_{i2}) \), the induction hypotheses state that the property \( A \) holds for \( v_{i1} \) and \( v_{i2} \).

Once a proof is completed, the ML program represented by it can be extracted easily. For rules WITNESS, IF, LET VAL and LET FUN, the extract is precisely the instantiation of \( P \). For IND, we need a simple translation from the ind function to an ML function definition using patterns.

The second kind of rules are rules for type-checking and for checking that a term inhabits \( \Sigma \). The third kind are rules for Walther Recursion analysis. These are given in Fig. 6. WSUBST is needed to make substitutions of local variables. The equality on the LHS, below the line, is introduced by the LET VAL rule. CYNTHIA actually includes a more general version of WSUBST where equalities of the form \( (x_1, x_2) = (u_1, u_2) \) are decomposed into \( x_1 = u_1 \) and \( x_2 = u_2 \).

An example of rule application may be illustrative. Consider the partition example again. After the usual \( \forall R \) rule has been applied to the specification a number of times, the goal looks like:

\[ z_1 : (\text{int} * \text{int} \rightarrow \text{bool}),\ z_2 : \text{int},\ z_3 : \text{int list} \vdash P_1 : ((f\ z_1\ z_2\ z_3) : \text{int list} \wedge (f\ z_1\ z_2\ z_3) \leq_w z_3) \]

where \( P \) has been instantiated to \( \lambda z_1. \lambda z_2. \lambda z_3.\ P_1 \). IND now applies.
In this case, the form of the IND rule used is as follows:

\[
\begin{array}{l}
H \vdash a_b : ((f\ z_1\ z_2\ \text{nil}) : \text{int list} \wedge (f\ z_1\ z_2\ \text{nil}) \leq_w \text{nil}) \\
H,\ h : \text{int},\ t : \text{int list},\ (f\ z_1\ z_2\ t) : \text{int list},\ X_1 : (f\ z_1\ z_2\ t) \leq_w t \\
\quad \vdash a_s : ((f\ z_1\ z_2\ (h::t)) : \text{int list} \wedge (f\ z_1\ z_2\ (h::t)) \leq_w (h::t) \wedge t \leq_w t) \\
\hline
H,\ z_3 : \text{int list} \vdash \text{ind}(z_3, a_b, \lambda h. \lambda t. \lambda X_1.\ a_s(t)) : ((f\ z_1\ z_2\ z_3) : \text{int list} \wedge (f\ z_1\ z_2\ z_3) \leq_w z_3)
\end{array}
\]

This rule mirrors the structure of the patterns in the definition of partition — i.e. there is a case for nil and a case for h::t. It checks that the recursive call is measure decreasing (\( t \leq_w t \)). It also tries to prove the RC lemma by induction.

The structure rules of Fig. 5 are as follows (LET FUN and IND are shown schematically; their full forms follow the descriptions above):

WITNESS:
\[ \dfrac{H \vdash t : T_0 \qquad H \vdash t \in \Sigma \qquad H \vdash A\{t/f(x_1 \ldots x_n)\}}{H \vdash t : (f(x_1 \ldots x_n) : T_0 \wedge A)} \]

IF:
\[ \dfrac{H \vdash e_1 : \text{bool} \qquad H \vdash e_1 \in \Sigma \qquad H, X : e_1 \vdash e_2 : A \qquad H, X : \neg e_1 \vdash e_3 : A}{H \vdash (\text{if } e_1 \text{ then } e_2 \text{ else } e_3) : A} \]

LET VAL:
\[ \dfrac{H \vdash e_1 : T \qquad H \vdash e_1 \in \Sigma \qquad H, v : T, X : (v = e_1) \vdash e_2 : A}{H \vdash (\text{let val } (v : T) = e_1 \text{ in } e_2 \text{ end}) : A} \]

LET FUN: from a lemma \( e_1 : (\forall v_1 : T_1. \ldots \forall v_n : T_n.\ (f\ v_1 \ldots v_n) : T_0 \wedge \ldots) \) stating the existence of \( f \) together with its RC lemmas, and a proof of \( e_2 : A \) under hypotheses recording those lemmas, conclude \( H \vdash (\text{let fun } f\ (v_1 : T_1) \ldots (v_n : T_n) = e_1 \text{ in } e_2 \text{ end}) : A \).

IND: for each base constructor \( c_{b_h} \), a subgoal \( a_{b_h} : (f(c_{b_h}(u_h)) : T_0 \wedge A(c_{b_h}(u_h))) \); for each step constructor \( c_{s_i} \), a subgoal \( a_{s_i} \) proving \( f(c_{s_i}(v_{i1}, v_{i2})) : T_0 \wedge A(c_{s_i}(v_{i1}, v_{i2})) \) together with the measure-decrease conditions \( R_{s_{j1}} \leq_w v_{i1} \vee R_{s_{j2}} \leq_w v_{i2} \) for each recursive call \( R_{s_j} \), under induction hypotheses \( A(v_{i1}) \) and \( A(v_{i2}) \); conclude \( H, L : B \vdash \text{ind}(L, \ldots) : (f(L) : T_0 \wedge A(L)) \).

Fig. 5. Structure Rules for CYNTHIA (1).

By applying IND, \( P_1 \) is instantiated to:

\[ \text{ind}(z_3, a_b, \lambda h. \lambda t. \lambda X_1.\ a_s(t)) \]

The ind rule gives rise to two subgoals.
Consider the base case first:

\[ \ldots \vdash a_b : ((f\ z_1\ z_2\ \text{nil}) : \text{int list} \wedge (f\ z_1\ z_2\ \text{nil}) \leq_w \text{nil}) \]

The base case continues by applying WITNESS, where \( a_b \) is instantiated to nil. This instantiation is in general provided by the user and is the one used here because it is the result in the base case in the definition of partition. WITNESS gives us three subgoals:

\[ \ldots \vdash \text{nil} : \text{int list} \qquad \ldots \vdash \text{nil} \in \Sigma \qquad \ldots \vdash \text{nil} \leq_w \text{nil} \]

The first two subgoals are proved easily using tactics for type-checking and semantics-checking respectively. The third is proved using WREFL. The step case subgoal is as follows:

\[ H,\ h : \text{int},\ t : \text{int list},\ (f\ z_1\ z_2\ t) : \text{int list},\ X_1 : (f\ z_1\ z_2\ t) \leq_w t \vdash a_s : ((f\ z_1\ z_2\ (h::t)) : \text{int list} \wedge (f\ z_1\ z_2\ (h::t)) \leq_w (h::t) \wedge t \leq_w t) \]

Instantiating \( a_s \) to if \( z_1(h,z_2) \) then \( E_2 \) else \( E_3 \), we can apply IF. This gives four subgoals. Type-checking and semantics-checking are done easily. The other two subgoals correspond to each branch of the conditional split. Let us consider the first branch only. The subgoal in this branch is:

\[ \ldots,\ X : z_1(h,z_2),\ (f\ z_1\ z_2\ t) : \text{int list},\ X_1 : (f\ z_1\ z_2\ t) \leq_w t \vdash E_2 : ((f\ z_1\ z_2\ (h::t)) : \text{int list} \wedge (f\ z_1\ z_2\ (h::t)) \leq_w (h::t) \wedge t \leq_w t) \]

Now we apply WITNESS, instantiating \( E_2 \) to \( h :: (f\ z_1\ z_2\ t) \). Again, type-checking and semantics-checking are dealt with easily. The remaining subgoal is:

\[ \ldots,\ X : z_1(h,z_2),\ (f\ z_1\ z_2\ t) : \text{int list},\ X_1 : (f\ z_1\ z_2\ t) \leq_w t \vdash (h :: (f\ z_1\ z_2\ t)) : \text{int list} \wedge (h :: (f\ z_1\ z_2\ t)) \leq_w (h::t) \wedge t \leq_w t \]

There are three conjuncts to prove. The first is trivial.
The second needs to be proved using the rules for Walther Recursion and an induction hypothesis. First, apply WCONS2. This gives the subgoal:

\[ \ldots \vdash (f\ z_1\ z_2\ t) \leq_w t \]

which is proved by the induction hypothesis. The third conjunct is easily proved using WREFL. The second branch of the conditional statement can be proved similarly. Collecting together all the instantiations, \( P \) has been instantiated to:

\[ \lambda z_1. \lambda z_2. \lambda z_3.\ \text{ind}(z_3, \text{nil}, \lambda h. \lambda t. \lambda X_1.\ \text{if } z_1(h,z_2) \text{ then } h :: (f\ z_1\ z_2\ t) \text{ else } (f\ z_1\ z_2\ t)) \]

A simple translation, along with a mechanism for keeping track of variable names, gives the program partition.

### 3.4 Replaying Proofs According to User Edits

When the user applies an editing command to the current program, CYNTHIA must apply a corresponding edit to the current synthesis proof. Typically, this edit will make an isolated change to the proof. CYNTHIA's replay mechanism then propagates this change through to the rest of the proof.

**Definition 5.** The Abstract Rule Tree (ART) of a proof is the tree of rule applications, where the hypothesis lists, goals etc. have been omitted.

The procedure for editing the proof is as follows. The user highlights the position in the program where he wishes to make a change. CYNTHIA calculates the corresponding position, \( pos \), in the proof tree. Let the synthesis proof be denoted by \( P_i \) and the proof subtree below \( pos \) by \( P_s \). CYNTHIA abstracts \( P_s \) into an ART \( A_s \). CYNTHIA then makes changes to \( A_s \) to give \( \phi(A_s) \). \( \phi(A_s) \) is then unabstracted, or replayed, to give the new proof subtree \( \phi(P_s) \). The complete new proof tree is then \( P_i \) with \( P_s \) replaced by \( \phi(P_s) \). Note that CYNTHIA abstracts only \( P_s \) and not the whole proof tree \( P_i \).
This saves effort because, due to the refinement nature of the proofs, any rules not in $P_s$ will be unaffected. Some commands also require a change to the specification. For example, ADD CURRIED ARGUMENT adds an additional type to the specification. The replay of the ART is the main method for propagating changes throughout the proof. The ART captures the dependencies between remote parts of the program and the replay of the ART updates these dependencies in a neat and flexible way. Changes to the program will mean that some of the previous subproofs no longer hold. In some cases, the system can produce a new proof. However, it may be that a subgoal is no longer true. Such subgoals correspond directly to errors in the program. The replay of the ART is a powerful mechanism for identifying program errors and highlighting them to the user. During the replay, if a rule no longer holds, a gap will be left in the proof. This corresponds to a position in the ML program and so the program fragment corresponding to where the proof failed can be highlighted to the user. This failed-proof rule usually denotes a type error or other kind of semantic error (e.g., unbound variable). Various optimizations have been implemented to improve the efficiency of the ART replay. Correctness-checking rules can be time-consuming and so CYNTHIA selectively replays these rules. CYNTHIA automatically decides which correctness-checking rules need to be replayed according to which editing command was applied. As an example, consider type-checking rules. In some cases, expressions within the ML program will not need to be type-checked during the replay. Consider applying the ADD CONSTRUCT command to introduce a conditional if then else statement into the program. This will copy the highlighted expression, E, to each branch of the condition to give: if C then E else E where E has been copied. Clearly, there is no point type-checking E during the replay as its status will be unchanged. 
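The replay-and-highlight idea can be sketched in Python (every name here is invented for illustration; CYNTHIA replays real inference rules over a synthesis proof, not toy checkers over a dictionary):

```python
# A toy Abstract Rule Tree (Definition 5): only rule names and program
# positions are kept; replaying re-checks each rule against the edited
# program and collects the positions where a rule no longer holds.
class Node:
    def __init__(self, rule, pos, children=()):
        self.rule, self.pos, self.children = rule, pos, list(children)

def replay(node, program, checkers, errors):
    """Re-check this node's rule; on failure, record its program position
    (the fragment CYNTHIA would highlight), then replay the children."""
    if not checkers[node.rule](program, node.pos):
        errors.append(node.pos)
    for child in node.children:
        replay(child, program, checkers, errors)
    return errors

# Program: positions mapped to (inferred_type, expected_type). The TYPE
# checker fails when the two disagree; WITNESS is purely structural here.
checkers = {
    "TYPE": lambda prog, pos: prog[pos][0] == prog[pos][1],
    "WITNESS": lambda prog, pos: True,
}
art = Node("WITNESS", "clause", [Node("TYPE", "arg1"), Node("TYPE", "arg2")])
program = {"arg1": ("string", "string"),   # well typed
           "arg2": ("exp", "string")}      # type error, like s in the While clause
assert replay(art, program, checkers, []) == ["arg2"]
```

The failed node is exactly the position reported back to the user, mirroring how a failed proof rule becomes a highlighted program fragment.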
Some commands will require that E is type-checked, however. If CHANGE TYPE is used to change the top-level signature, then the target synthesis proof may require E to inhabit some new type. We must apply type-checking to see if this holds. 4 Why Use Proofs? The use of proofs to represent ML programs is a flexible framework within which to carry out various kinds of analyses of the programs. The idea for CYNTHIA grew out of work on the recursion editor [2], an editor for Prolog that only allows terminating definitions. The recursion editor was severely restricted, however, to a much smaller class of terminating programs. It also had CYNTHIA-like transformations but these were stored as complex rewrite rules, the correctness of which had to be checked laboriously by hand. The use of a proof to check correctness eliminates the possibility of error in such soundness-checking. The use of a proof is a natural way to provide detailed feedback on program errors. When an editing command is applied, any errors correspond directly to failed proof obligations. No extra effort is required to look for new errors — the edit is just applied and then the proof is replayed as far as possible. In addition, CYNTHIA provides a framework for carrying out more sophisticated analysis than is done at present. This could be done by expressing additional properties in the specification of the proof. Clearly, the proof of such specifications could be arbitrarily hard, but the proofs could still be done automatically if only certain properties or restrictions were considered and proof strategies for these were implemented. CYNTHIA could also be extended to incorporate optimizing transformations such as those in the KIDS [12] system. The proof framework is also a very natural one for this purpose. 5 Evaluating \textit{CYNTHIA} \textit{CYNTHIA} has been successfully evaluated in two trials at Napier University. 
The first trial involved a group of 40 postgraduates learning ML as part of a course in Formal Methods. The second trial involved 29 Computer Science undergraduates. Full results of these trials can be found in [14]. Although some semi-formal experiments were undertaken, most analysis was done informally. However, the following trends were noted: - Students make fewer errors when using \textit{CYNTHIA} than when using a traditional text editor. - When errors are made, users of \textit{CYNTHIA} locate and correct the errors more quickly. This especially applies to type errors. - \textit{CYNTHIA} discourages aimless hacking. The restrictions imposed by the editing commands mean that students are less likely, after compilation errors, to blindly change parts of their code. - \textit{CYNTHIA} encourages a certain style of programming. This style is generally considered to be a good starting point for learning functional programming. The editing commands correspond to FP concepts and hence discourage, for example, attempts to program procedurally. 6 Related Work Proofs-as-programs seems to be a good framework for designing correctness-checking editors. Another possible framework is that of attribute grammars [1, 10], which attach annotations to a language’s grammar so that properties can be propagated throughout the abstract syntax tree. Proofs-as-programs wins in two main ways. First, proofs-as-programs gives a sounder theoretical underpinning. The correctness of programs in \textit{CYNTHIA} comes from the underlying proof. The soundness of the proof rules is easy to check. In contrast, however, it would be a massive, if not impossible, undertaking to check the correctness of an attribute grammar implementing a \textit{CYNTHIA}-like editor. Second, proofs-as-programs seems more suited for functional programming. The proof structure localises the relevant parts of the program — for instance, an induction rule encapsulates the kind of recursion. 
This means that information is localised rather than being spread across the grammar. No ML editors have been produced using attribute grammars. A couple of other ML editors have recently become available, however. MLWorks [5] and CtCaml [11] have different objectives than \textit{CYNTHIA}. MLWorks is an integrated environment for ML with no structure-editing facilities or advanced correctness-checking. CtCaml is a structure editor for ML. Its structure editing is primitive, however, in contrast to \textit{CYNTHIA}’s specially designed commands. \textit{CYNTHIA} offers incremental correctness-checking whereas MLWorks users must compile their programs to receive feedback. 7 Conclusions This paper has presented CYNTHIA, a novel environment for writing ML programs, primarily aimed at novices. The user writes ML programs by applying correctness-preserving editing commands to existing programs. Each ML definition is represented as the proof of a simple specification which guarantees various aspects of correctness, including termination. The use of an underlying proof provides a sound framework in which to analyse and provide feedback on users’ programs. The proof checking is fully automatic and hidden from the user. CYNTHIA has been successfully tested on novice ML students. References
Circumventing Refactoring Masking using Fine-Grained Change Recording

Quinten David Soetens, Javier Pérez and Serge Demeyer
University of Antwerp, Antwerpen, Belgium
{quinten.soetens; javier.perez; serge.demeyer}@uantwerp.be

Andy Zaidman
Delft University of Technology, Delft, The Netherlands
a.e.zaidman@tudelft.nl

ABSTRACT
Today, refactoring reconstruction techniques are snapshot-based: they compare two revisions from a source code management system and calculate the shortest path of edit operations to go from the one to the other. An inherent risk with snapshot-based approaches is that a refactoring may be concealed by later edit operations acting on the same source code entity, a phenomenon we call refactoring masking. In this paper, we performed an experiment to find out at which point refactoring masking occurs and confirmed that a snapshot-based technique misses refactorings when several edit operations are performed on the same source code entity. We present a way of reconstructing refactorings using fine-grained changes that are recorded live from an integrated development environment and demonstrate on two cases —PMD and Cruisecontrol— that our approach is more accurate in a significant number of situations than the state-of-the-art snapshot-based technique RefFinder.

Categories and Subject Descriptors
D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement

Keywords
Refactoring Reconstruction; Refactoring Masking; Fine-Grained Changes; Software Evolution

1. INTRODUCTION
Refactoring is widely recognised as a crucial technique applied when evolving object-oriented software systems. The key idea is to redistribute program entities and responsibilities in order to prepare the software for future extensions. If applied well, refactoring improves the design of software, makes software easier to understand, helps to find bugs, and helps to program faster [10].
As such, refactoring has received widespread attention within both academic and industrial circles, and is mentioned as a recommended practice in the software engineering body of knowledge [1].

Given this widespread attention, several researchers set out to reconstruct refactorings as they occurred in the evolution of software projects. Initially, this was mainly an act of scientific curiosity (e.g., [3, 21, 37, 39, 27, 14]); however, later on actual applications emerged. Weißgerber et al., for instance, used this as a means for studying the impact of refactorings on defects [11, 38]. Dig et al. prototyped a capture-playback tool capable of replaying refactorings when migrating systems dependent on a refactored API [13, 7, 8]. Obviously, several authors tried to correlate the impact of refactorings on the maintainability of a software project [32, 16, 22, 36].

In the meantime, several field studies and surveys indicated that if refactoring is applied in practice, it is mainly interwoven with normal software development [17, 15]. A side effect of this interweaving is that a commit in a source code management system tends to consist of more than just a single refactoring [2]. Indeed, Negara et al. reported that 46% of refactored program entities are also edited in the same commit [19]. Consequently, state-of-the-art refactoring reconstruction techniques miss a significant portion of the actual refactorings, because they infer refactorings by comparing two revisions of a system and making educated guesses about the precise edit operations applied in between. At that point, it is virtually impossible for such snapshot-based refactoring reconstruction tools to correctly deduce refactorings since these may be concealed by other changes. Negara et al. found that on average 30% of refactoring operations do not reach the version control system [18]. We call this the “refactoring masking” phenomenon and investigate the nature of the problem in a first Research Question.
RQ 1 – Refactoring Masking. Under which conditions does a snapshot-based approach fail to reconstruct refactorings?

To address this first research question we followed the refactoring script of a small yet representative program (the LAN Simulation [6]) and committed individual atomic changes to separate revisions in a source code repository. We then ran RefFinder [14] to compare all possible combinations of revisions to investigate under which conditions RefFinder fails to reconstruct the refactorings. We found that some refactorings indeed conceal others as they act on the same source code entities. Combinations of EXTRACTMETHOD and MOVEMETHOD are particularly vulnerable.

A solution to this problem might be to use the actual changes, as performed in an integrated development environment. Assuming that an integrated development environment provides logging facilities for all editing operations (as done in, e.g., Spyware [25], Syde [12], Cheops [9], OperationReplayer [20] and ChEOPSJ [33]), we might query this stream of changes to distinguish refactorings from ordinary program edits. We performed a proof by construction via a tool prototype named ChEOPSJ, which sits in the background of Eclipse and records the changes made to a software system while a developer is programming. We compared this tool prototype against the state-of-the-art snapshot-based approach, as explained by the second research question.

RQ 2 – Comparison. Do fine-grained changes allow us to reconstruct refactorings where snapshot-based approaches fail?

We compare the change-based approach (exemplified by ChEOPSJ [33]) against a snapshot-based approach (exemplified by RefFinder [14]) on two open source cases – PMD and Cruisecontrol. We locate instances of the refactoring masking phenomenon and show that the change-based approach is indeed more accurate in reconstructing refactorings in those cases.
Moreover, we argue that this improved accuracy is relevant by estimating the number of edit operations acting on the same source code entities within 5 minutes after an ExtractMethod refactoring.

We structured the remainder of this paper as follows. Section 2 introduces the state of the art, including a description of the ChEOPSJ tool prototype. Next, Sections 3 and 4 each address one of the research questions with their own experimental setup and results. The final two sections 5 and 6 wrap up the paper with a discussion of the limitations of – and threats to – the validity of our research and summarise our major findings in the conclusions.

2. STATE OF THE ART

2.1 Snapshot-based Reconstruction
In this paper, we use the term “refactoring reconstruction” to refer to any software reengineering technique used to infer refactorings that were performed in the history of a software system. The current state of the art consists of analyses of snapshots maintained in a source code repository. Most approaches use some kind of code similarity measure to identify possible refactoring candidates. Dig et al. as well as Weißgerber et al. used a combination of a signature-based analysis and shingles (a form of hashing) [7, 8, 11, 38]. Van Rysselberghe et al. use clone detection on two versions to look for a decrease in the number of clones. Since many refactorings are aimed at the elimination of duplicated code [37], this would suggest that a refactoring was performed. There exist two approaches that do not rely on code similarity. Demeyer et al. developed a set of heuristics to identify refactorings using decreasing code entities [3]. Xing and Stroulia search for refactorings at the design level using their UMLDiff algorithm, which is capable of detecting some basic structural changes to the system [39, 27]. RefFinder, an Eclipse plugin by Kim et al.
is the most comprehensive refactoring reconstruction tool, as it supports 63 different types of refactorings [14]. They use the technique proposed by Prete et al., which is stronger than all previous techniques because it not only detects primitive refactorings (which all previous techniques do to some extent) but also “complex refactorings” (i.e., refactorings which are combinations of primitive refactorings). To do this they rely on a fact base with a strong query engine (TyRuBa logic) [21]. They describe the structural constraints before and after applying a refactoring in terms of template logic queries. RefFinder takes two versions of a system as input from the Eclipse workspace and recovers changes as logic facts about the systems’ syntactic structure using LSDiff. These are then stored in a factbase, which can be queried to identify program differences that match the constraints of each refactoring type under focus. We opted to use RefFinder in our experiments, because it is a representative of the state-of-the-art in snapshot-based refactoring reconstruction techniques and because the list of refactorings it is able to detect is currently the most comprehensive.

Figure 1: The types of changes implemented.

2.2 Change-based Reconstruction
An alternative to the snapshot-based approach is to use the actual edit operations as performed in an integrated development environment. In such an approach a tool silently records the activities of the programmers while they are working, and registers all the changes as performed. For instance, if the programmer modifies a method, the recorder instantiates change objects for each of the statements that were added, changed or removed. This approach was used by Robbes and Lanza in Spyware [25]; by Hattori and Lanza in Syde [12]; by Omori and Maruyama in OperationReplayer [20]; and by Ebraert et al. in ChEOPSJ [9].
We extended the latter approach with our tool prototype ChEOPSJ\(^1\), which is a Java version of the same model [34, 33]. It operates in the Eclipse background and silently records the changes performed by the developer.

\(^1\)ChEOPSJ: Change and Evolution Oriented Programming Support for Java (http://vin.ua.ac.be/~qsoeten/other/cheopsj/)

In our tool we implemented two kinds of Atomic Changes: Add and Remove (see Figure 1). These act upon a Subject that represents an actual entity in the source code. For these subjects we implemented a subset of the FAMIX model [5] (see Figure 2). We chose the FAMIX model as it captures most object-oriented programming languages. It defines entities representing packages, classes, methods and attributes, as well as more fine-grained entities such as method invocations, variable accesses and inheritance relationships. In our change model the changes are interconnected through dependencies. These dependencies between change objects (see Figure 1) are determined by the relationships between the entities in the subset of the FAMIX model we are using (see Figure 2). Hence, the dependencies between change objects are defined as follows: a change $c_1$ is said to depend on another change $c_2$ if the application of $c_1$ without $c_2$ would violate the system invariants. For instance, an addition of a method depends on the addition of a class, as you cannot add a method to a nonexistent class. As such, a software system and its entire evolution is represented as a graph with the changes as nodes and the dependencies between the changes as edges. Once the sequence of changes and their dependencies is recorded, we use Groove [24], a graph transformation tool, to search the change graph for pre-defined patterns corresponding to a refactoring. We chose Groove because it uses a simple XML format to store its graphs, so it was easy to export the change graphs from ChEOPSJ into a Groove-readable format.
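As an illustration, the change model just described — Add and Remove changes acting on source code subjects, connected by dependency edges — can be sketched in a few lines of Java. The class names below are illustrative only and do not reflect ChEOPSJ's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Subjects stand for FAMIX-like source code entities.
abstract class Subject {
    final String name;
    Subject(String name) { this.name = name; }
}
class Clazz extends Subject { Clazz(String n) { super(n); } }
class Method extends Subject {
    final Clazz owner;
    Method(String n, Clazz o) { super(n); owner = o; }
}

// Changes are nodes of the evolution graph; dependency edges encode
// "applying this change without that one violates system invariants".
abstract class Change {
    final Subject subject;
    final List<Change> dependencies = new ArrayList<>();
    Change(Subject subject) { this.subject = subject; }
    void dependsOn(Change other) { dependencies.add(other); }
}
class Add extends Change { Add(Subject s) { super(s); } }
class Remove extends Change { Remove(Subject s) { super(s); } }

public class ChangeGraphDemo {
    public static void main(String[] args) {
        Clazz printer = new Clazz("Printer");
        Method print = new Method("print", printer);

        Add addClass = new Add(printer);
        Add addMethod = new Add(print);
        // You cannot add a method to a nonexistent class,
        // hence the dependency edge.
        addMethod.dependsOn(addClass);

        Remove removeMethod = new Remove(print);
        removeMethod.dependsOn(addMethod);

        System.out.println(removeMethod.dependencies.contains(addMethod)); // prints true
    }
}
```

Recording then amounts to appending such change nodes as the developer edits, so the whole evolution becomes a queryable graph.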
Besides that, Groove offers a fast and scalable state space exploration, so it should be able to find our refactoring patterns on large graphs relatively quickly. As an example, we briefly describe how we reconstruct a PullUpMethod refactoring from a graph of changes. The other refactorings are defined in a similar way and are published on figshare [29]. The Groove graph transformation is shown in Figure 3. The top of this pattern describes how the classes are related. The class from which the method is removed needs to be different from the class in which we added a method. The class node on the left should be a descendant of the class on the right. We express this relationship using a regular expression (-superclass.subclass)+, meaning that we traverse the edge in the opposite direction (with -superclass) from the superclass node to the implicit Inheritance node, and then traverse the edge in the normal direction (subclass) to the subclass node. Adding the + makes this the transitive closure, meaning that we can trace this edge to any descendant class in the inheritance tree of the superclass. The bottom half of the pattern describes the changes to the methods. In the subclass the method has (at least) two changes: an addition (which is dependent on the addition of the class) and a removal which is dependent on the addition of the method. In the superclass the method has (at least) one change: an addition. Moreover, the method in the subclass and the superclass should have an identical name. The left side of the pattern (the subclass and its method, along with the additions of both and the removal of the method) has a universal quantifier ($\forall$), meaning that this pattern applies to all subgraphs of this kind. In other words, for each instance of a removal of the method in a subclass, this removal is part of the reconstructed PullUpMethod refactoring.
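To make the pattern concrete, the same matching logic can be sketched in plain Java over a flat list of recorded method changes. This is an illustrative approximation, not the Groove rule itself, and the MethodChange record is hypothetical: a PullUpMethod is present when a method removed from a class reappears, with an identical name, as an addition in one of that class's (transitive) superclasses.

```java
import java.util.List;
import java.util.Map;

public class PullUpMatcher {
    // kind is "add" or "remove"; owner is the class the method belongs to.
    record MethodChange(String kind, String method, String owner) {}

    static boolean isPullUp(List<MethodChange> changes,
                            Map<String, String> superclassOf) {
        for (MethodChange removed : changes) {
            if (!removed.kind().equals("remove")) continue;
            for (MethodChange added : changes) {
                if (!added.kind().equals("add")) continue;
                if (!added.method().equals(removed.method())) continue;
                // The addition must be in a (transitive) superclass of the
                // class the method was removed from: the transitive closure
                // expressed by (-superclass.subclass)+ in the Groove rule.
                String c = superclassOf.get(removed.owner());
                while (c != null) {
                    if (c.equals(added.owner())) return true;
                    c = superclassOf.get(c);
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, String> sup = Map.of("FileServer", "Node", "Printer", "Node");
        List<MethodChange> changes = List.of(
            new MethodChange("remove", "getEndName", "FileServer"),
            new MethodChange("remove", "getEndName", "Printer"),
            new MethodChange("add", "getEndName", "Node"));
        System.out.println(isPullUp(changes, sup)); // prints true
    }
}
```

The Groove rule additionally quantifies over all subclasses and records the match as a new node; the sketch only answers whether the pattern occurs.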
This graph transformation rule adds a new node – PulledUpMethod – linked to (i) the changes that remove instances of the method in subclasses and (ii) the change that adds the method to the superclass. However, this should only be done if these changes are not already linked to a previously reconstructed refactoring node. We argued the feasibility of a change-based approach for refactoring reconstruction in a previous paper, where we had implemented a way for detecting MoveMethod and RenameMethod [35]. For this paper, we extended this proof by construction to incorporate 11 of the refactoring rules that can be expressed on the model in Figure 1 and Figure 2:

- PullUpMethod
- PullUpField
- PushDownMethod
- PushDownField
- MoveClass
- MoveField
- RenamePackage
- RenameClass
- RenameMethod
- RenameField
- MoveMethod

With this list, we have a sufficient basis to compare against the state-of-the-art snapshot-based tool RefFinder [14].

3. REFACTORING MASKING

In this section we address RQ 1: Under which conditions does a snapshot-based approach fail to reconstruct refactorings? We illustrate the refactoring masking phenomenon by using the state of the art tool RefFinder [14] on a small yet realistic system: the LAN Simulation [6]. This is a script of refactorings that are performed on a small system. It is mostly used as a teaching lab to teach how and why to refactor.

3.1 Experimental Setup
We followed the script of the LAN Simulation [6] and injected some non-refactoring changes along the way. After every atomic change we committed revisions to a local subversion repository. For these commits, we used the same level of granularity (method-level changes) as the model in our change recording tool shown in Figure 2. For instance, when performing a MoveMethod refactoring, we executed a simple copy, paste and delete and then updated the signature and the invocations.
This resulted in at least 5 commits to the repository: after the copy; after the paste; after the delete; after the signature update; and after the invocation update. As such we created a fine-grained change model stored in a subversion repository, with each revision containing one change. We then had RefFinder compare each revision with every other revision in order to find both the smallest and the largest distance between revisions needed to reconstruct a refactoring.

3.2 Results
We performed a series of 22 refactorings in 150 commits: 2 instances of IntroduceExplainingVariable; 7 instances of ExtractMethod; 10 instances of MoveMethod; 2 instances of ExtractSubclass and a single instance of ReplaceConditionalWithPolymorphism. The repository is published on figshare [30]. We compared all possible pairs of these 150 commits with RefFinder. That is, we compared revision 1 to revisions 2 to N, then we compared revision 2 to revisions 3 to N, and so on. We then summed up all the distinct refactorings RefFinder could reconstruct across all pairs of revisions. We found that RefFinder reconstructed 100 refactorings, of which 40 were false positives, 19 were true positives and 41 were neither, but could be considered as side effects (or subrefactorings) of the performed refactorings. For instance, a MoveMethod usually also involved the removal of a parameter, as the class to which the method was moved used to be a parameter. Another example was the ExtractSubclass refactorings, which also consisted of 3 MoveMethod refactorings. Additionally there were three false negatives: two instances of ExtractMethod and one instance of MoveMethod refactorings that we performed, but that RefFinder did not manage to reconstruct. The important thing to note is that most of the refactorings we performed were reconstructable by RefFinder at one point or another.
In Figure 4 we show the minimum and maximum windows under which these occur. The maximum window is shown below the minimum window; the latter shows the revisions where the refactoring was actually performed. The maximum window denotes up to which point, either before or after the minimum window boundaries, RefFinder is capable of identifying the performed refactoring. To the right of the figure, each refactoring is numbered for easy referencing in the following paragraphs. We see that refactorings 7, 8, 13, 16, 17, 18, 19 and 20 have a window that reaches to the HEAD revision, which implies that these refactorings were not masked by any other changes. The other eleven refactorings are at one point masked by other changes, hence no longer reconstructable. The first refactoring masking instances that we want to highlight are the refactorings 3, 6, 10, 12 and 14. These are one ExtractMethod and four MoveMethod refactorings that are masked by changes other than refactoring operations. The ExtractMethod was no longer reconstructable, since we completely changed the code inside the extracted method. It was changed in such a way that it semantically did more or less the same thing, but syntactically it was completely different. A similar observation can be made with the four MoveMethod refactorings that are masked by non-refactoring changes. The other six refactorings are masked by other refactoring operations (indicated with the dashed lines). The first are the two IntroduceExplainingVariable refactorings (refactorings 1 and 2); these were hidden by MoveMethod 3 (refactoring 8), as this move operation moved the method in which the variables were introduced. The figure then also shows that four ExtractMethod refactorings were hidden by a MoveMethod refactoring. What happened is that some code was extracted to a method and this newly created method was then moved to a different class, at which point it was no longer possible to reconstruct the ExtractMethod refactorings.
A special case is ExtractMethod 2, which is shown twice in the figure (as refactorings 4 and 5). This is because this refactoring extracted a duplicate series of statements from two methods into a new method. RefFinder identified this as two distinct ExtractMethod refactorings. We conclude from this experiment that the minimum window in which a refactoring can be identified needs to comprise, at least, the changes of that refactoring. The maximum window in which the refactoring can be reconstructed is uncertain, as a refactoring can be hidden by other operations. In our case study, we have observed that a refactoring can always be reconstructed as long as no other changes act on the same source code entities as that refactoring. One could argue that it is possible to write new rules for RefFinder that identify a combination of refactorings, but this is an infeasible approach, since we cannot be expected to devise rules for every possible combination of refactoring operations. As an answer to RQ 1, we conclude that a snapshot-based approach fails to reconstruct refactorings when other edit operations act on the same source code entities. We then say that the refactoring is “masked” by those other changes.

4. COMPARING REFFINDER TO CHEOPSJ

In this section we address RQ 2: Do fine-grained changes allow us to reconstruct refactorings where snapshot-based approaches fail? We search for instances of refactoring masking in two real-world cases and see whether a change-based approach is capable of reconstructing refactorings where snapshot-based approaches do not.

4.1 Experimental Setup
We used two larger open source cases—PMD and Cruisecontrol. PMD\(^3\) is a source code analyser to find a variety of common mistakes like unused variables, empty catch blocks, unnecessary object creations, and so on. Cruisecontrol\(^4\) is a framework that allows for creating a custom continuous integration process.
These projects were selected as they are written in Java and their development history is freely available on a subversion repository. More importantly, the developers of these two software projects used the commit messages to consciously document some of the applied refactorings. We could thus mine these commit messages using a simple grep command to identify those revisions with documented refactorings. We searched for commit messages that contain the terms “refactor(-ed,-ing)”, “move(-d)” or “rename(-d)”. We sampled these two projects for cases of refactoring masking and selected 10 revisions containing masked refactorings. Tables 1 and 2 show the selected revisions in which we have a total of 26 masked refactorings. These revisions were selected with the following criteria: each revision should be one with refactoring documented in the commit message; RefFinder should be unable to reconstruct any refactorings; and a manual analysis of the revision should show that there were actual refactorings performed and that RefFinder’s refactoring reconstruction failed due to refactoring masking. Note that this is a very conservative way of looking for cases of refactoring masking, since we cannot claim that there is no refactoring masking in either the revisions that had no documented refactorings or the revisions where RefFinder did find refactorings. In these cases of refactoring masking, we set out to re-perform those refactorings while recording our changes with ChEOPSJ in order to obtain a fine grained change model. Suppose revision \(R_x\) is documented as a refactored version of revision \(R_{x-1}\) and RefFinder cannot identify the performed refactorings because they were masked by other changes. In this case we checked out revision \(R_{x-1}\) and distilled an addition change for every source code entity in this revision to have a starting set of changes. 
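Returning to the mining step above: the commit-message filter we applied with grep can be approximated in Java using the same search terms. The regular expression mirrors “refactor(-ed,-ing)”, “move(-d)” and “rename(-d)”; the example messages below are invented for illustration:

```java
import java.util.List;
import java.util.regex.Pattern;

public class CommitFilter {
    // Same terms as the grep command: refactor/refactored/refactoring,
    // move/moved, rename/renamed, matched case-insensitively as whole words.
    static final Pattern REFACTORING_TERMS = Pattern.compile(
        "\\b(refactor(ed|ing)?|moved?|renamed?)\\b",
        Pattern.CASE_INSENSITIVE);

    static boolean mentionsRefactoring(String logMessage) {
        return REFACTORING_TERMS.matcher(logMessage).find();
    }

    public static void main(String[] args) {
        List<String> log = List.of(
            "Renamed getNow to getTimeOfCheck",
            "Fixed NPE in report generation",
            "refactoring: extracted getFinallyBlock");
        // Keep only the messages that document a refactoring.
        log.stream()
           .filter(CommitFilter::mentionsRefactoring)
           .forEach(System.out::println);
    }
}
```

The word boundaries keep words like “remove” from matching “move”, which a naive substring search would not.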
We then started recording the changes and performed the refactorings that we identified during our manual analysis in order to obtain the fine-grained change history. At the end of re-performing the refactorings, we verified (using Eclipse’s built-in compare support) that our changed version of the system matched revision \(R_x\). We could then export our graph of recorded changes to a Groove-readable format and had Groove perform our graph transformation rules to reconstruct the refactoring instances.

4.2 Refactoring Masking Instances
In the PMD project we analysed 6 revisions, where RefFinder was unable to reconstruct the refactorings performed due to refactoring masking. The details of these six revisions are in Table 1. In all of the cases, the masking involved an EXTRACTMETHOD refactoring followed directly by a MOVEMETHOD or PULLUPMETHOD refactoring. This means that the developers extracted a piece of code and immediately moved it to a class where they felt it belonged. RefFinder is unable to reconstruct either of these refactorings. For the EXTRACTMETHOD it looks for a newly created method in the same class, but the method is already moved to a different class. For the MOVEMETHOD it looks for the class from which the method originates, but there was no such method to begin with. A typical example is shown in listings 1 and 2. An ExtractMethod extracted some code from the visit method in the class EmptyFinallyBlockRule into a new method called getFinallyBlock. This method was then moved to the class ASTTryStatement.

### Listing (1) PMD revision 1084.

```java
package net.sourceforge.pmd.rules;

public class EmptyFinallyBlockRule extends AbstractRule {
    public Object visit(...) {
        // some Code that is extracted and moved
    }
}
```

### Listing (2) PMD revision 1085.
```java
package net.sourceforge.pmd.rules;

public class EmptyFinallyBlockRule extends AbstractRule {
    public Object visit(...) {
        ASTBlock finallyBlock = node.getFinallyBlock();
    }
}

package net.sourceforge.pmd.ast;

public class ASTTryStatement extends SimpleNode {
    public ASTBlock getFinallyBlock() {
        // some Code that is extracted and moved
    }
}
```

Another similar operation was in revision 2257, where they moved a few statements from one method to another method in a different class. One way of doing this is simply cutting and pasting the code from the one method to the other, which is probably what the developers did. However, this same result could also be achieved by performing an ExtractMethod and a MoveMethod to get the piece of code to the right class. Then the invocation to the new method needs to be removed from the original method and an invocation needs to be added in the place where we want the code. To finish off, an InlineMethod would put the code where we want it. Cruisecontrol also provided us with a few examples of masked RenameMethod refactorings. In one case (revision 879), this refactoring is hidden by a RenameField refactoring. In RefFinder the rule to reconstruct a RenameMethod checks whether the method body has a certain similarity. In this case the method body consisted of a single statement: a return statement returning the value of the field. Since the field itself was also renamed to a name that is very different, the method body was no longer similar enough to count as a RenameMethod.

### 4.3 Change-based Reconstruction
Our approach for refactoring reconstruction based on recorded changes was capable of reconstructing all refactorings for which we currently have rules. More precisely, our approach allowed us to identify 12 out of 26 refactorings that, for RefFinder, were masked. These include the RenameMethod refactoring that we performed in PMD as well as the two MoveMethod refactorings we performed in Cruisecontrol. As an example we present the results of the refactoring pattern reconstructed from the changes we performed to go from revision 658 to revision 659 in PMD (Figure 6) and the patterns reconstructed from the changes between revision 878 and 879 in Cruisecontrol (Figure 7). All other resulting graphs can be found on figshare [31]. Figure 6 shows that there was a PullUpMethod refactoring that removed two instances of a method named “getEndName” in two subclasses of the class “UnusedCodeRule” and an addition of a method by the same name in the superclass. Figure 7 shows that in Cruisecontrol we were able to reconstruct two refactorings from a set of changes: one is the RenameField refactoring that removes the attribute _now and adds the attribute timeOfCheck; the other is the RenameMethod refactoring that changes the name of this attribute’s getter from getNow to getTimeOfCheck. We conclude that the presence of fine-grained changes allows us to reconstruct refactorings where snapshot-based approaches fail. Indeed, we have found several occurrences of masked refactorings in two real-world open source cases, all of which RefFinder was unable to reconstruct. In contrast, our change-based approach was able to reconstruct 12 out of 26 masked refactorings; the other refactorings could be reconstructed by extending the source code model (see Figure 2) and defining new rules for these particular refactorings. As such we effectively answered RQ 2.

4.4 Is this relevant?
Knowing that change-based approaches are capable of reconstructing refactorings where snapshot-based approaches fail, the natural follow-up question is to what extent this improvement is relevant. That is, how often does refactoring masking occur in real software projects? This question is impossible to answer precisely, given that there is no project where all refactoring operations are recorded [17]. Moreover, Negara et al.
already reported that on average 30% of refactoring operations do not reach the version control system [18]. Nevertheless, we can make a rough estimate based on the data gathered by the Eclipse Usage Data Collector (UDC)\(^5\). This data is made publicly available, and Emerson Murphy-Hill et al. have put the whole data set on Google BigQuery, which enables us to query and process this data using Google’s storage and compute infrastructure [28]. We know that snapshot-based approaches typically fail when several edit operations act on the same source code entities. In particular, sequences of ExtractMethod, MoveMethod (and in the case of Cruisecontrol also a RenameMethod) operating on the same segments of code are likely to cause misses. From the UDC dataset it is clear that the RenameMethod operation is by far the most used automated refactoring in Eclipse; MoveElement and ExtractMethod are also among the top most used automated refactoring operations. This gives a first indication that refactoring masking is relevant. We are most interested in the combination of ExtractMethod and MoveMethod, so we looked at the edit operations that occurred within 5 minutes after an ExtractMethod operation (see Figure 8). Here we assume programming locality, that is, two edit operations that occur closely together are likely to act on the same source code entities [23]. MoveElement appears at the nineteenth position, but the top three operations are Delete, Paste and Copy, which serve as a manual substitute for a move. Launching a query which counts all ExtractMethod operations that are followed within 5 minutes by either a MoveMethod or by a Copy, a Paste and a Delete, we found 10,869 instances out of a total of 43,602, thus almost 25%. Note that this is a conservative estimate, as these are the situations where we know a snapshot-based approach will fail.
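The counting described above can be sketched as a simple scan over a timestamped event stream. The event names and the log below are invented for illustration; the actual analysis ran as a query over the UDC data on Google BigQuery:

```java
import java.util.List;
import java.util.Set;

public class MaskingEstimate {
    // One UDC-style usage event: a command name and a timestamp.
    record Event(String command, long epochSeconds) {}

    // Operations that move code, or manually substitute for a move.
    static final Set<String> MOVE_LIKE = Set.of("Move", "Copy", "Paste", "Delete");

    // Count ExtractMethod events followed by a move-like operation
    // within the 5-minute (300 s) window assumed in the text.
    static long countLikelyMasked(List<Event> events) {
        long count = 0;
        for (int i = 0; i < events.size(); i++) {
            if (!events.get(i).command().equals("ExtractMethod")) continue;
            for (int j = i + 1; j < events.size(); j++) {
                long gap = events.get(j).epochSeconds() - events.get(i).epochSeconds();
                if (gap > 300) break; // outside the 5-minute window
                if (MOVE_LIKE.contains(events.get(j).command())) {
                    count++;
                    break;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<Event> log = List.of(
            new Event("ExtractMethod", 0),
            new Event("Copy", 60),        // within the window: likely masked
            new Event("ExtractMethod", 1000),
            new Event("Save", 1100));     // not a move-like operation
        System.out.println(countLikelyMasked(log)); // prints 1
    }
}
```

The 300-second threshold encodes the programming-locality assumption; widening or narrowing it directly trades recall against the risk of counting unrelated edits.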
In reality, there are likely to be more, since we only took into account those change sequences which start with an ExtractMethod. Therefore, we argue that a snapshot-based approach is likely to miss a significant number of refactoring sequences, and hence that the potential improvements induced by a change-based approach are indeed relevant. 5. THREATS TO VALIDITY We now identify factors that may jeopardise the validity of our results and the actions we took to alleviate the risk. Consistent with the guidelines for case study research [26, 40] we organise the identified threats into four categories. Construct validity – do we measure what was intended? We relied on the versioning system’s log messages to identify revisions corresponding to refactorings. Since no strict conventions are in place for what should be specified in such messages, there may be significant differences in the content and quality of log messages across tasks and developers. Consequently, we might miss certain revisions which do correspond to refactorings. However, it was never our intent to find all instances of refactorings that occurred in the system’s evolution. In that sense, using this simple way of locating instances of refactorings is sufficient for our purposes. An additional threat could be that the expertise and experience of developers plays a key role here. A developer who knows the purpose of the different refactorings might be less inclined to perform floss refactoring and may actually commit the refactoring as a whole to the source code management system. Moreover, the fine-grained recorded changes did not exist in the original sample from the repositories, so we had to manually re-perform them. These changes might not be the ones applied by the developers. The transformations we performed in order to obtain the fine-grained change history are just one possible change sequence out of many potential scenarios.
We used our expertise to choose the transformations that we would have applied. We verified (using Eclipse’s built-in compare support) that our changed version of the system matched the actual revision in the repository. Internal validity – are there unknown factors which might affect the outcome of the experiment? The substitution of “real” recorded changes by a manual synthetic reproduction of them might be a confounding factor for our results. The improvement in the number of masked refactorings detected by our approach over RefFinder might not be due to the availability of fine-grained changes but to the particular change sequences we manually applied. To reduce this risk, the first two authors of this paper proposed and discussed these change sequences in order to come up with the transformations that, in our opinion, are closest to the real ones. As already mentioned, there can also be a problem of selection bias caused by how we decided on the refactored revisions we used as experiment subjects. The need to verify RefFinder’s results and to manually apply the fine-grained changes led us to sample only those revisions whose commit messages mentioned refactoring operations. In order to alleviate the potential bias, we ran the experiments on a toy example and on two different open source systems, and obtained similar results from each. External validity – to what extent is it possible to generalise the findings? In this study we investigated two cases: Cruisecontrol and PMD. We chose them to be sufficiently different, yet, with only two data points, we cannot claim that our results generalise to other systems. The results are also dependent on the number of refactoring detection rules we have implemented. The instances of refactoring masking we have analysed might very well appear in other systems. We cannot, however, make any claims about other types of (as yet unidentified) refactoring masking.
According to [19], changes in snapshot-based versions are often obscured by other changes. We can therefore expect the refactoring masking problem to be more prominent than what we have inspected. We should also expect different behaviours when developers commit to SVN or Git. It has been observed that programmers commit more often to Git repositories, resulting also in smaller commits [2]. The need for a change-recording mechanism might not be as important in the context of Git repositories, although it also depends on whether the developers use commit squashing (grouping several related changes in one single commit). Reliability – is the result dependent on the tools? We used RefFinder to construct the baseline for our experiments. The tool might have produced false positives and false negatives, wrongly identifying some refactorings while missing others. On the one hand, this is the best existing tool for refactoring detection; therefore, despite the possible errors, it still serves well as a baseline to compare against. On the other hand, we have manually verified the refactorings detected by RefFinder, thus reducing the risk of false positives. In order to implement our approach, we relied on tools of our own making as well as some external tools. Our ChEOPSJ tool is implemented as an Eclipse plugin and relies on Eclipse’s internal Java model, which can be considered a reliable component. In order to reduce the bias caused by possible bugs and errors in the tool, we tested it extensively. Members of the research groups and some developers at a partner company installed ChEOPSJ and acted as beta-testers for three months. This resulted in many bug fixes. 6. CONCLUSIONS In this paper we have shown how a software evolution history comprised of fine-grained recorded changes can be exploited to reconstruct refactorings more accurately than the state-of-the-art snapshot-based technique RefFinder.
To provide a more detailed summary of our findings, we review the research questions we have addressed: RQ 1 Under which conditions does a snapshot-based approach fail to reconstruct refactorings? Snapshot-based approaches fail when other edit operations act on the same source code entities. In particular, combinations of ExtractMethod and MoveMethod confuse a snapshot-based approach, since the effect of the former is concealed by the latter. We then say that the refactoring is “masked” by those other changes. Since such simultaneous edit operations might happen at any time, it is impossible to determine an optimal window of changes within which snapshot-based reconstruction will still function properly. Hence, the only alternative to faithfully reconstruct refactorings is to have access to the fine-grained changes applied to the code. RQ 2 Do fine-grained changes allow us to reconstruct refactorings where snapshot-based approaches fail? We sampled the version history of two open source projects where the developers made an effort to explicitly document some of the refactorings applied. In particular, we made an opportunistic sample, selecting versions where simultaneous edits on the same entity were performed. Under these conditions, snapshot-based approaches indeed fail to reconstruct the refactorings, while the change-based approach does succeed. Next, we argued that these conditions occur frequently: we estimate that 25% of all ExtractMethod refactorings are followed by either a MoveMethod or by a Copy, a Paste and a Delete. Contributions. Over the course of this research, we made the following contributions: - We implemented a tool prototype named ChEOPSJ serving as an experimental platform for conducting feasibility studies with first-class representation of changes in Java. - We demonstrated how this platform can be used to reconstruct refactoring operations from a stream of changes.
- We applied the prototype to two cases, PMD and CruiseControl, to compare a change-based approach against a snapshot-based approach. - We demonstrated that the change-based approach is more accurate than a snapshot-based approach. - We argued that this improved accuracy is relevant by estimating the number of edit operations acting on the same source code entities within 5 minutes after an ExtractMethod refactoring. Future work. Our plan for the immediate future is to gather real recorded data and replicate our experiments, thus moving from in-vitro to in-vivo research [4]. We are currently deploying the change-recording plugin at some partner companies, which should result in detailed streams of changes about which we can interview the developers concerning the details of the refactorings. Next, we plan to implement additional detection rules for other refactorings besides the 11 listed in Section 2.2, to investigate other refactoring masking conditions. Particularly interesting in that respect would be a more detailed change model fully representing AST entities below the method signature level. Our findings, together with similar results obtained by others (e.g., Spyware [25], Syde [12], Cheops [9], OperationRecorder [20]), indicate that it is not only feasible but also worthwhile to maintain fine-grained evolution histories of software projects. Given the popularity of Git, which encourages a fine-grained commit behaviour, having an explicit representation of the changes is the natural successor to the current generation of distributed version control systems. 7. ACKNOWLEDGMENTS This work has been sponsored by (i) the Interuniversity Attraction Poles Programme - Belgian State - Belgian Science Policy, project MoVES; (ii) the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen) under project number 120028 entitled “Change-centric Quality Assurance (CHAQ)”.
We hereby express our gratitude to Arend Rensink for his quick response on the Groove discussion forum\(^6\), thereby helping us out with the Groove syntax. 8. REFERENCES \(^6\)http://sourceforge.net/p/groove/discussion/407076/thread/8eb1f2c4/
A Speech Recognition Module for Speech-to-Text Language Translation

by Sadiki Pili Mwanyoha

Submitted to the Department of Electrical Engineering and Computer Science on May 18, 1998, in partial fulfillment of the requirements for the degrees of Master of Engineering and Bachelor of Science at the Massachusetts Institute of Technology. © Massachusetts Institute of Technology 1998. All rights reserved. Certified by Clifford J. Weinstein, Group Leader, Information Systems Technology Group, MIT Lincoln Laboratory (Thesis Supervisor). Accepted by Arthur C. Smith, Chairman, Department Committee on Graduate Theses.

Abstract

This thesis involved the design and implementation of a speech recognition module to be used in a speech-to-text translation system. The module accepts continuous (English) speech input from a task-specific grammar. The two main areas of system development were: 1) speech modeling and 2) interfacing a recognition kernel to the GUI of an existing automated language translation system. The performance of several candidate speech models is explored. A detailed discussion of the kernel interface is also presented. Future applications center on facilitating communication in multinational environments.

Thesis Supervisor: Clifford J. Weinstein
Title: Group Leader, Information Systems Technology Group, MIT Lincoln Laboratory

Acknowledgments

First, I want to thank my thesis advisor for helping me to set realistic goals, helping me maintain my self-esteem, and looking for the benefits behind every apparent mishap.
Next, I would like to thank Young-Suk for putting up with my stifling appetite for disk space and always showing an interest in my work, using kind, encouraging words. Lastly, I want to thank Linda for always lending an ear, helping me to see the humor in everyday academia, and giving me sound advice on a wide range of topics.

Contents

1 Introduction
1.1 Problem Statement
2 Background and Related Work
2.1 Background
2.1.1 TINA-GENESIS
2.1.2 Speech Recognition Primer
2.2 Related Work
2.2.1 Parameter Interpolation
2.2.2 Previous Live Input System
3 Design
3.1 HTK™ Development Environment
3.1.1 Data Preparation
3.1.2 Acoustic Model Training
3.1.3 Language Model Training
3.2 Enabling Live Input
4 Results
4.1 Methods of Evaluation
4.2 Results Presented
4.3 Discussion
5 Future Work
5.1 Speech Modelling
5.2 Interface to HTK
A Concise hvite-interface Manual

List of Figures

2-1 Process flow for English-to-Korean text translation
2-2 A 5-state HMM
2-3 A typical 3-word tree
3-1 Nonlinear mapping of probabilities when scale factor \( s \) is applied to (log) probabilities
3-2 HVite state diagram
3-3 hvite-interface flow diagram

List of Tables

2.1 Maximum likelihoods and \( \tilde{\lambda}_{opt} \) for different block sizes when performing DI on the OR text corpus
4.1 Model set results. Statistics shown are word-accuracy, sentence-accuracy, insertions (I), deletions (D), and substitutions (S)
4.2 Model set statistics

Chapter 1 Introduction

To date, researchers have invested considerable effort in automating the translation of human language. The ability to translate human language automatically has the potential to alleviate many of the barriers associated with global communication and has therefore captured the attention of various sectors. In particular, the demand for multilingual communication within the military community has grown steadily and, consequently, provided the motivation for several world-wide efforts in this area. Traditionally, the US Military has relied solely on the services of human translators. But due to the inherent scarcity of human translators, the opportunities for effective multilingual communication are limited.
For this reason, automation of the translation function promises to enhance multilingual communication at many levels by supplementing, and eventually replacing, human translation. With this purpose in mind, the automatic translation problem can be stated as follows. A translation system should receive a message in the source language as input and then produce this message in the target language as output. The input-output representations of the message to be translated can, in general, be any combination of speech and text (i.e. speech-to-speech, speech-to-text, etc.). For example, in a multilingual database query application the input could be either speech or text, while the output would be constrained to text (since databases are text-based). On the other hand, in a document translation situation both the input and output would, of course, be text. A core translation function must convert text in the source language into equivalent (in meaning) text in the target language using state-of-the-art natural language analysis/synthesis techniques. In the case of document translation this core is all that is needed (aside from some preprocessing of the input). However, in speech-input and speech-output applications, additional speech-recognition and speech-synthesis modules are needed, respectively. Set forth in this thesis is a proposal to implement the speech-recognition module required in an English-to-Korean speech-to-text translation system. 1.1 Problem Statement The US Military has maintained an ongoing presence in Korea. To enable operation in a foreign territory it depends heavily on English-to-Korean translation. As mentioned above, there is much to be gained from the automation of the translation function, so an automatic speech-to-text translation system would be a highly desirable tool. We will refer to the communication context, or task domain, in which this system will function as the Operation Reports (OR) domain.
The OR corpus, which contains the training and test data necessary for system development, consists of 111 sentences represented in both English and Korean text. The sentences were collected from recordings of actual correspondence between officers during the course of military operations. Together, they define a domain-specific grammar around which a translation system can be designed. It is hoped that such a system could serve as a translator's aid during the course of multi-national operations similar to those which generated the OR corpus. It should be noted at this point that this is by no means a first attempt at speech-to-text translation. Consequently, a mainstream approach to speech recognition is taken, which specializes to the particular grammars encountered in the OR domain. Furthermore, the core text translation system with which the speech module has been integrated consists of the TINA and GENESIS modules developed at the Spoken Language Systems Group, MIT Laboratory for Computer Science; these subsystems have already demonstrated considerable success in similar projects. As these modules will play a major role in the aggregate system, they will be discussed in the next chapter along with an introduction to speech recognition technology. Chapter 2 Background and Related Work 2.1 Background 2.1.1 TINA-GENESIS This project has been completed under the Information Systems Technology Group (IST), MIT Lincoln Laboratory. IST was the first group to use TINA-GENESIS to solve language translation problems, and the two modules continue to play an integral role in all of the group’s translation projects. [12] The approach IST has taken involves the logical breakdown of the translation process into understanding and generation steps, performed by TINA and GENESIS respectively. In the understanding step TINA takes source language text as input and, using a source grammar and analysis lexicon, produces a semantic frame.
The semantic frame captures the meaning of the message in that it is a language-independent, or *interlingual*, representation. In the generation step GENESIS uses a target grammar and synthesis lexicon to transform the semantic frame into the corresponding text representation in the target language. To make the process perceptually concrete, the reader should note that grammars, lexicons, and messages are implemented as files, while TINA and GENESIS are realized with text-processing programs. A pictorial representation of the process flow involved is presented in Figure 2-1. [12] A fundamental advantage of this interlingual approach is that by decoupling the understanding and generation steps, the influence of a particular source-target language pair on translation requirements is eliminated. Besides increasing performance, the absence of such an influence can greatly reduce system size and complexity. 2.1.2 Speech Recognition Primer Today’s high-performance speech recognizers almost exclusively use a statistical model of speech production built around Hidden Markov Models. What follows is a brief introduction to the theory on which the model is based. Like all quantitative models of physical phenomena, a statistical model of speech can only be developed in light of a series of logical assumptions. The first assumption made is that we can define a statistical grammar \((S)\) as the set of all possible word sequences taken from a vocabulary, where each sequence has its own probability of occurrence. Next, we assume that each word sequence \((W)\) generates an acoustic observation sequence \((O)\) with probability \(P(W,O)\). Our estimate \((\hat{W})\) of the word sequence in \(S\) corresponding to a given observation sequence \((O)\) is that sequence which has the maximum a posteriori (MAP) probability. That is, \[ \hat{W} = \underset{W \in S}{\operatorname{argmax}} P(W \mid O), \tag{2.1} \] which is of course optimal in the MAP sense.
Using Bayes' Rule this can be rewritten as \[ \hat{W} = \underset{W \in S}{\operatorname{argmax}} \frac{P(O \mid W)P(W)}{P(O)}. \tag{2.2} \] Since $P(O)$ does not depend on $W$, this reduces to \[ \hat{W} = \underset{W \in S}{\operatorname{argmax}} P(O \mid W) P(W). \tag{2.3} \] Before proceeding further, the reader should note that when the observations take on discrete values $P(\cdot)$ will be a probability. For the continuous case $P(\cdot)$ will be a likelihood. For the rest of the discussion, likelihoods and probabilities will be used interchangeably, so one should keep their respective senses in mind. The author's intention is not to confuse the reader, but rather to emphasize that both discrete and continuous realizations are viable. $P(O \mid W)$ is termed the acoustic model, while $P(W)$ is called the language model. The acoustic model proposes a statistical framework for producing acoustic observations, whereas the language model tries to capture the probabilistic nature of the grammar $S$. We now discuss both models individually. **The Acoustic Model** To model the generation of spoken word sequences we must first decide on a fundamental speech unit from which to build these sequences. Our choice should negotiate the tradeoff between discrimination ability and trainability. That is, the set of speech units we choose should capture the inherent differences in speech sounds, implying a coarse (in time) partitioning; this results in a relatively large number of speech units. On the other hand, the Weak Law of Large Numbers suggests that training a model set on limited sample data requires a lot of data per unit; this implies a fine-grained (in time) partitioning, or relatively fewer speech units. For example, if we chose our speech units to be words, our model's ability to discriminate between words would be excellent, in theory. But the number of models relative to the amount of data available to train them would be prohibitive from an estimation standpoint.
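In practice the MAP rule above is evaluated in the log domain, so that products of many small probabilities do not underflow. The sketch below is a minimal Python illustration of the decision rule only; the candidate sequences and their scores are invented for illustration and do not come from the OR system.

```python
import math

def map_decode(candidates):
    """Return the word sequence maximizing
    log P(O|W) + log P(W), i.e. the MAP rule in the log domain.
    Each candidate is (words, log_acoustic, log_language)."""
    best, best_score = None, -math.inf
    for words, log_acoustic, log_language in candidates:
        score = log_acoustic + log_language
        if score > best_score:
            best, best_score = words, score
    return best

# Hypothetical scores: the second hypothesis fits the audio slightly
# better, but the language model strongly prefers the first.
candidates = [
    (("move", "two", "units"), -120.0, -4.1),   # total -124.1
    (("move", "to", "units"),  -118.0, -9.7),   # total -127.7
]
decoded = map_decode(candidates)  # ("move", "two", "units")
```

The example makes the role of the two factors concrete: the language model can override a slightly better acoustic match, which is exactly why both terms appear in the decision rule.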
In practice, a common choice for the fundamental speech unit is the phoneme. There are between 40 and 50 phonemes in the English language. This loose assessment is indicative of phoneme sets being defined *linguistically* rather than acoustically. Even so, phonetic events are known to occur on the order of a few hundred milliseconds and do provide reasonable acoustic discrimination while being readily trainable. The most basic phoneme models are called monophones. Monophones are context-independent models of speech units: they do not attempt to model the effects that neighboring phonemes (defining a context) have on each other. Such effects are due to a phenomenon called coarticulation. Coarticulation can be described as the merging of adjacent phonemes that occurs during the course of natural speech; a phoneme can be accentuated or deemphasized by the presence of a particular neighbor. For example, Rabiner shows in [10] how phonemes that appear on word boundaries are accentuated, as in the sentence *he eats several light tacos*. This demonstrates only one of numerous types of coarticulation. A way to improve acoustic discrimination at the expense of trainability is to move to a much larger set of biphones or triphones, which are context-dependent phoneme models. Biphones model left or right contextual effects, while triphones model both left and right ones. Contemporary approaches to modeling phonemes rely on the Hidden Markov Model (HMM). An HMM is a Finite State Network (FSN) possessing two defining characteristics: 1) FSN state transitions and occupations are probabilistic and 2) each FSN state is associated with an observable output which is also probabilistic. Figure 2-2 shows a five-state HMM that has continuous output distributions. The $a_{ij}$ denote transition probabilities while the $p_i$ indicate output distributions. Note also the non-emitting entry and exit states.
Their purpose is to enable the concatenation of multiple HMMs, which is a desirable feature, as we shall see. We model each phoneme by assigning it its own HMM. That is, we assume that a set of acoustic observations can be generated by a corresponding sequence of HMM state occupations. Moreover, the dynamics of these acoustic observations are governed by HMM transition parameters. The acoustic observations are usually chosen to be a speech parameterization derived from frequency measures, such as Mel-scale cepstra, or speech coding, such as linear prediction coefficients. Given the occurrence of a particular phone, the observations form a vector of random variables obeying the state output distributions of the corresponding HMM; these random variables are often termed features. State output distributions are simple probability mass functions in the discrete case. In the continuous case it is customary to choose Gaussian mixture densities, which are entirely determined by means, covariances, and mixture weights. Recall that the acoustic modeling problem arose from the need to compute \( P(O \mid W) \), or the probability that a specific word sequence \( (W) \) would generate an acoustic observation sequence \( (O) \). This quantity is readily computable once we have assumed a speech production model at the word sequence level. Logically, the desired model is no more than the concatenation of the HMMs constituting each of the words in the given sequence; in practice the necessary HMM substitutions are made using a pronouncing dictionary. The result is a single monolithic chain of HMMs. To compute \( P(O \mid W) \) we first note that there are many possible state sequences which could produce \( O \). For example, two consecutive observation samples could be produced by a double occupation of one state or two single occupations of adjacent states. 
The likelihood of a particular sequence is the product of the transition probabilities and output likelihoods associated with that sequence. That is, for a state sequence \( Q = q(1), q(2), ..., q(T) \), \[ P(O, Q \mid W) = a_{q(0)q(1)} \prod_{t=1}^{T} b_{q(t)}(o_t)a_{q(t)q(t+1)}, \tag{2.4} \] assuming independence of observations; where $q(0)$ and $q(T + 1)$ are the entry and exit states of the model for $W$; where $a_{ij}$ is the probability of transitioning from state $i$ to $j$; and where $b_j(o_n)$ denotes the output distribution of state $j$ evaluated at the $n$-th observation sample. The total likelihood is then the sum of $P(O, Q | W)$ over all state sequences $Q$. A useful approximation to the total likelihood is the likelihood of the most likely state sequence. This quantity can be computed very efficiently using a dynamic programming technique called the Viterbi Algorithm. The Viterbi Algorithm assigns an observation sequence to a state sequence by considering one observation sample at a time and choosing the maximum likelihood path. [13, p. 48] Now that the acoustic model has been formulated we can move on to the language model. **The Language Model** The language modeling problem involves the determination of $P(W)$, or the probability of an observed word sequence $W = w_1w_2...w_P$. We start by representing the sample space of all possible word sequences by an $M$-ary tree, where $M$ is the size of the vocabulary. An example is shown in Figure 2-3. A word is represented by a tree node and it can be followed by any other word. The root node marks the beginning of the sentence. The inter-word transitions are denoted by arcs and, unlike HMM transitions, are dependent on the prior word sequence from the root node to the active node. Our tree-based model is justified considering that \( P(W) \) can be expressed as \[ P(w_1w_2...w_P) = P(w_1)P(w_2 | w_1)P(w_3 | w_1w_2)...P(w_P | w_1w_2...w_{P-1}).
\tag{2.5} \] Essentially, we have constructed a tree-shaped FSN. So, after assigning each factor to the appropriate arc we can compute \( P(W) \) by multiplying the transition probabilities associated with the path corresponding to \( W \). Practically, this construction is not feasible because it implies the estimation of conditional probabilities for a wide range of possible word combinations and sequence lengths. Estimation of such a large number of probabilities would inevitably suffer from lack of training data. One solution is to use the approximation \[ P(w_i | w_1w_2...w_{i-1}) \approx P(w_i | w_{i-N+1}...w_{i-1}) \tag{2.6} \] In other words, an inter-word transition depends only on the prior \( N - 1 \) words. This simplification leads to the so-called \( N \)-gram grammars. Two common choices are the bigram (\( N=2 \)) and trigram (\( N=3 \)). If \( N = 1 \) there is no dependence on the prior sequence, so this case corresponds to a no-grammar or unigram grammar.\[10, \ p. \ 447\]

Knowing how a particular language model compares to others is just as important as being able to construct it. It is therefore appropriate at this point to introduce the concept of \textit{perplexity}. Perplexity is an objective measure of language model complexity used extensively in the development process. It represents the average number of candidate words that the recognizer must consider solely on the basis of their acoustic likelihood. In other words, perplexity rates the ability of the language model to eliminate unlikely words from consideration; one can think of it as a branching factor. Higher perplexities mean greater complexity and, in general, poorer recognition performance. The perplexity, \( P_e \), is defined as \[ P_e = 2^{H_e}, \] where \[ H_e = -\frac{\sum_k \log_2 P(S_k)}{\sum_k |S_k|} \] (2.7) \( H_e \) is an estimate of the entropy, or average number of bits per word, needed to encode sentences of the language. \( |S_k| \) is the length of the \( k \)-th sentence.
\( P(S_k) \) is the probability that the language model will produce sentence \( S_k \). Perplexity is useful when developing a language model as we shall see in the next chapter.[14] Thus far, we have modeled a statistical grammar and presented a measure of its complexity. The grammar uses a tree-shaped FSN in much the same way we modeled speech production using HMMs. As we shall see shortly, this similarity facilitates the merging of the two models during recognition.

The Recognition Process

So far, we have developed approximations to the acoustic and language models of Equation 2.3. We now address the problem of practically integrating the two models during recognition. Equation 2.3 is a MAP decision rule which requires considering the acoustic and language likelihoods of the set \( S \) of all possible word sequences of finite (but arbitrary) length. This is much too expensive from a computational standpoint, so a recognition approach which considers only a small subset of \( S \) would be desirable. Such an approach should be able to disqualify unlikely sequences while completing the likelihood calculations of more likely candidates. An effective way to approximate the exhaustive search technique exploits the fact that the acoustic and language models both employ the FSN paradigm. To begin, the HMM substitution step used in the acoustic model is applied to the language model word-tree. The result is a monolithic tree of HMMs. For reasons we will see in a moment, we would like to calculate the total likelihood of a partial word sequence given an observation sequence. To do this we start at the root node and take the product of the N-gram probabilities and HMM likelihoods (via the Viterbi Algorithm) encountered while charting the appropriate path through the word-tree. The total likelihood of a partial word sequence gives us a criterion for separating good candidates from bad candidates at each level of the word-tree.
So, at each level of the word-tree we consider all possible word transitions and reject those candidate sequences which are relatively unlikely. This streamlining of the word sequence search is called pruning. The pruned search is terminated when the last observation sample has been assigned to an HMM state. The chosen sequence is that survivor having the highest likelihood.

Parameter Estimation

We have assumed thus far that all transitions, output distributions, and N-grams were exactly known. In reality, however, they are not and must be estimated. The most common means for estimating transitions and distributions is the so-called Baum-Welch method named after its inventors.[2] Essentially, the algorithm iteratively computes maximum likelihood estimates which are based on sample means and variances appearing in a training data set (consisting of parameterized speech). It can be shown that this method is equivalent to maximizing the likelihood that a given model would produce the corresponding training data.[10, 7] A widely used method of effecting a good discrimination/trainability trade-off is that of parameter tying (or clustering). Parameter tying involves the sharing of parameters between models. By forcing two models to share an output distribution, for example, this method effectively increases the amount of training data per parameter; this obviously boosts the reliability of the estimate. It is clear, however, that clustering cannot be performed indiscriminately, since the merging of two completely different monophones, say s and m, would result in a loss of model validity. For this reason, the decision whether to tie two parameters is usually based on some measure of their similarity. Typically, Euclidean distance measures are used for these purposes. An alternative criterion for regulating the use of tying is based on the so-called Maximum Mutual Information (MMI) principle.
This technique permits tying only in cases where the loss in entropy caused by tying is minimal. N-gram estimates can be derived from the relative frequencies of N-length sequences appearing in a training (text) corpus. In instances where a particular N-gram is sufficiently scarce it can be helpful to transfer probability mass from the most frequent N-grams to the one in question to maintain robustness. This leads to the so-called back-off N-gram, where the name reflects the discounting of frequent N-grams and the retreat to lower-order estimates. Another approach is to estimate specific N-grams by smoothing them with more general N-grams. The motivation is that detailed N-grams are statistically similar to their generalizations, which, in turn, have more reliable frequencies. An example might be the estimation of trigram probabilities based on trigram, bigram, and unigram frequencies. \[ \hat{P}(w_3 \mid w_1, w_2) = p_1 \frac{F(w_1, w_2, w_3)}{F(w_1, w_2)} + p_2 \frac{F(w_2, w_3)}{F(w_2)} + p_3 \frac{F(w_3)}{\sum_w F(w)} \quad [10, p.448] \quad (2.8) \] Alternatively, the generalized N-gram sets used for averaging can also be based on grammatical rules as shown by Jelinek.[3] To illustrate, a detailed N-gram might be THE-HOUSE. One of its generalizations might be (ARTICLE)-HOUSE, where (ARTICLE) denotes any article. For the remainder of this paper we will refer to this as a POS, or part-of-speech, bigram. This modeling technique tries to balance the discrimination/trainability trade-off. In the literature it is referred to as parameter smoothing and has been applied to several other problems in speech recognition. An alternative which shares some elements of the first two approaches averages the detailed N-grams from the training corpus with the detailed N-grams of a larger, more reliable one. This concludes our outline of current speech recognition techniques.
At best, it is a cursory treatment of an intricate estimation theory and should be regarded only as suggestive of contemporary system requirements.

2.2 Related Work

In this section, we introduce two projects that are related to, but not part of, the principal work under discussion.

2.2.1 Parameter Interpolation

As discussed in the speech recognition primer, it is often useful to smooth bigram estimates by taking a linear combination of two or more different distributions. Such an approach is referred to as parameter interpolation. We chose to consider using parameter interpolation to synthesize an OR language model because this method has been successful in prior work. Preliminary results, however, revealed that in actuality this technique was not appropriate for the OR task domain. Nevertheless, these results possess diagnostic value and are, therefore, interesting in their own right. We now present the theory of parameter interpolation, followed by a discussion of its application to an OR language model. Parameter interpolation is a way to generate a hybrid probability distribution from an optimal combination of two or more component distributions; the combination is usually linear. The optimality criterion used is often based on the Maximum Likelihood (ML) concept. ML was discussed earlier in the context of Viterbi decoding, a classification problem. It has also been applied to the estimation of probability distribution parameters, or parameter estimation. Parameter interpolation is one of several approaches to parameter estimation which rely on the ML concept. Since ML is so central to this technique, it is appropriate at this point to present its fundamental principles in the context of parameter estimation. ML, popularized by R.A. Fisher, is a method of estimating probability distributions that optimally matches an assumed distribution to a sample of observed data.
To use this method, one first assumes that the random variable under study, a bigram for instance, can be described by a distribution having an explicit functional form, parameterized by some vector $\tilde{\lambda}$. Formally, we assert that a random variable $x$ has a probability density $P_x(x, \tilde{\lambda})$. The idea is to use observed data to estimate $\tilde{\lambda}$. Assuming statistical independence of observations, the likelihood of an observed data set \( \tilde{y} = \{y_1, y_2, \ldots, y_N\} \) is simply a product of individual likelihoods. That is, \[ \prod_{i=1}^{N} P_x(y_i, \tilde{\lambda}) \overset{\Delta}{=} L(\tilde{\lambda}) \] (2.9) In the literature \( L(\tilde{\lambda}) \) is referred to as the likelihood function. The ML method prescribes that one choose \( \tilde{\lambda}_{opt} \) such that \( \tilde{\lambda}_{opt} = \underset{\tilde{\lambda}}{\text{argmax}} \ L(\tilde{\lambda}) \). From an intuitive standpoint, this is a logical choice for \( \tilde{\lambda}_{opt} \) since the likelihood of an observed data set should be greater than that of an unobserved one (whose likelihood is not available for comparison). In practice, the logarithm of the likelihood is used instead, which simplifies the maximization procedure. A classic example of ML is the estimation of the Gaussian distribution parameters \( \tilde{\lambda} = \{\mu, \sigma^2\} \), which are the mean and variance. Performing the above maximization results in choosing \( \mu \) and \( \sigma^2 \) to be the sample mean and variance, respectively. Interpolation is a scheme in which two or more distributions are combined to form a new, hybrid distribution. When the combination is linear, the weights of the component distributions are left as parameters to be estimated. That is, the hybrid distribution is defined as \[ P_x(x, \tilde{\lambda}) = \sum_{k=1}^{M} \lambda_k P_{x,k}(x) \] (2.10) where \( M \) is the number of components used.
The \( \lambda_k \) are constrained so that \( \sum_{k=1}^{M} \lambda_k = 1 \) and \( \lambda_k > 0 \). In the discrete case this ensures that \( \sum_x P_x(x) = 1 \). In the special case where the \( P_{x,k}(x) \) are identical this constraint dictates that \( P_x(x) = P_{x,k}(x) \). With this formulation one can preferentially weight component distributions according to how much they increase the likelihood of the data. Combining relations 2.9 and 2.10 yields an expression for the likelihood of sample data \( \tilde{y} \), assuming an interpolated distribution for the random variable \( x \). \[ \prod_{i=1}^{N} \sum_{k=1}^{M} \lambda_k P_{x,k}(y_i) \overset{\Delta}{=} L(\tilde{\lambda}) \] (2.11) Thus, an interpolated distribution yields a particular instance of the \( L(\tilde{\lambda}) \) defined in 2.9, if we assign the distribution weights to the parameter vector \( \tilde{\lambda} \). So, we may use the same technique used in general ML estimation to determine the optimal distribution weights. It is often the case that the component distributions are derived from the same sample data that is used to calculate \( L(\tilde{\lambda}) \). For example, when estimating bigram probabilities by interpolating relative frequencies of bigrams and unigrams appearing in sample text, the same text is then used to generate \( L(\tilde{\lambda}) \). Deleted Interpolation (DI), popularized in the speech community by Jelinek[3], is a method of allowing the \( \tilde{\lambda} \) adjustment process to compensate for unseen data, while retaining the benefits of preferential weighting. DI is performed by first dividing the sample data \( \tilde{y} \) into \( B \) blocks. The likelihood of the deleted block \( q \) is defined as \[ \prod_{y_i \in \text{block } q} \sum_{k=1}^{M} \lambda_k P_{x,k}^{(q)}(y_i) \triangleq L^{(q)}(\tilde{\lambda}), \tag{2.12} \] where \((q)\) indicates that all component distributions are derived from the data samples in the other \( B - 1 \) blocks collectively.
So, in the case of interpolated bigrams, relative frequencies from the \( B - 1 \) other blocks are used to generate component distributions appearing in the likelihood of the \( q \)-th block; this block is termed deleted, since it is not used to derive component distributions. The total likelihood is just the product of deleted likelihoods over all blocks \( L(\tilde{\lambda}) = \prod_{q=1}^{B} L^{(q)}(\tilde{\lambda}) \), where \( L^{(q)}(\tilde{\lambda}) \) is as defined in 2.12. This complementary treatment of blocks is based on cross-validation, a statistical concept. It aims to govern distribution weighting such that the ability of component distributions to characterize independent data is taken into account. As before, the weighting vector \( \tilde{\lambda} \) is chosen by maximization of \( L(\tilde{\lambda}) \). We see that the prediction property is encapsulated in this choice of \( \tilde{\lambda} \), since each deleted likelihood can be viewed as the best prediction of the deleted block based on the other \( B - 1 \) blocks. We now move on to the topic of using deleted interpolation to estimate bigrams. Let us denote a word pair as \( w = \{w_1, w_2\} \), where \( w_2 \) follows \( w_1 \). Recall that a bigram grammar associates each word pair occurrence with a probability $Pr(w_2 \mid w_1)$. That is, given that the last word was $w_1$, $Pr(w_2 \mid w_1)$ is the probability that the next word will be $w_2$. This is a conditional probability. When we allow $w_2$ to be any word from a finite vocabulary, we are left with a conditional distribution, $P(w_2 \mid w_1)$. To model all possible word sequences with a bigram grammar such a distribution must be estimated for each prior $w_1$. We decided to construct the estimated distributions from the interpolation of bigram, unigram, and POS bigram frequencies found in the text corpus. The interpolation weights were determined using DI. Applying DI to a text corpus is for the most part straightforward. 
There is, however, a practical issue that must be addressed when calculating likelihoods. The key observation is that the total likelihood of the data is not simply the product of individual likelihoods pertaining to independent events. Word events appearing in the same sentence, for example, are highly correlated; this is, after all, what we try to exploit in using statistical grammars. To simplify things, let us first consider the likelihood of a single sentence. Fortunately, we have already seen it in equation 2.5. It is the “telescoping” product of conditional probabilities that is used in Viterbi decoding. Remember that we approximated each factor by an N-gram probability. Jelinek has shown that such an approximation can also be used successfully in calculating the likelihood of a sentence observed in a text corpus. In our case the N-grams are a linear combination of bigrams, unigrams, and POS bigrams, whose weights make up the same vector $\lambda$ that we are trying to estimate. So, the approximate likelihood of a single observed sentence is the product of interpolated 2-grams, which are parameterized by $\lambda$. We then assume that 2-grams found in different sentences are independent. This is a good assumption, since the grammar of one sentence should not, in general, be strongly related to the grammar of another. To construct the total likelihood of the data, then, we take the product of likelihoods for each observed sentence. We are left with a function of $\lambda$, $L(\lambda)$, which can be maximized to find $\lambda_{opt}$. 
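The maximization of $L(\lambda)$ over the interpolation weights can be carried out with an EM-style re-estimation loop. The sketch below is illustrative rather than the exact procedure used in the experiments; it assumes the component likelihoods of each observation have been precomputed, and the function name and data layout are our own:

```python
def interpolation_weights(component_liks, iters=50):
    """EM re-estimation of linear interpolation weights.

    component_liks[i][k] holds P_{x,k}(y_i), the likelihood of sample i
    under component distribution k.  Returns the weight vector lambda
    that (locally) maximizes the interpolated likelihood L(lambda).
    """
    M = len(component_liks[0])
    lam = [1.0 / M] * M                       # start from uniform weights
    for _ in range(iters):
        counts = [0.0] * M
        for liks in component_liks:
            mix = sum(l * p for l, p in zip(lam, liks))
            for k in range(M):
                # posterior responsibility of component k for this sample
                counts[k] += lam[k] * liks[k] / mix
        lam = [c / len(component_liks) for c in counts]
    return lam
```

Each iteration provably does not decrease $L(\lambda)$, so the weights drift toward the components that best explain the data.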
| # Blocks | $\log L(\lambda_{opt})$ | $\lambda_{ugm}$ | $\lambda_{bgm}$ | $\lambda_{pos}$ |
|----------|-------------------------|-----------------|-----------------|-----------------|
| 1        | -659.0                  | 0.0             | 1.0             | 0.0             |
| 2        | -1855.2                 | 0.38            | 0.32            | 0.30            |
| 3        | -1928.4                 | 0.38            | 0.33            | 0.29            |
| 4        | -1922.3                 | 0.41            | 0.28            | 0.31            |

Table 2.1: Maximum likelihoods and $\lambda_{opt}$ for different numbers of blocks when performing DI on the OR text corpus.

We applied this algorithm to the OR corpus by tabulating the necessary data, determining the likelihood function, and maximizing it to find $\lambda_{opt}$. We considered four different data partitionings corresponding to dividing the data into one, two, three, and four blocks, where the number of blocks is the same $B$ mentioned above. Notice that the single block case reduces to classic ML estimation; there are no deleted blocks. The results are shown in Table 2.1. As expected for the ML case ($B = 1$), the bigram relative frequency estimator received the maximum weight of unity. This agrees with the fact that relative frequencies are, in general, maximum likelihood estimates. For deleted interpolation ($B > 1$) we see that the unigram receives the most weight in all cases. This is consistent with the fact that unigrams appear in significantly higher numbers than bigrams and are, therefore, more reliable estimators. POS and standard bigrams seem to be slightly less significant. Based on these results, we concluded that the inclusion of POS bigrams in language model estimation would not result in any significant improvements over the back-off bigram language model, which incorporates both bigrams and unigrams.
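For concreteness, an interpolated bigram estimate of the kind evaluated above can be sketched as follows. The count tables, POS lookup, and function name are hypothetical placeholders, not the actual experimental code:

```python
def interpolated_bigram(w1, w2, F2, F1, Fpos, pos, total, lam):
    """P(w2 | w1) estimated as a weighted sum of bigram, unigram, and
    POS-bigram relative frequencies, with weights lam = (bgm, ugm, pos).

    F2 / F1 / Fpos are word-pair, word, and (POS, word) count tables;
    pos maps a word to its part-of-speech class; total is the corpus size.
    """
    l_bgm, l_ugm, l_pos = lam
    bgm = F2.get((w1, w2), 0) / max(F1.get(w1, 0), 1)
    ugm = F1.get(w2, 0) / total
    p = pos[w1]  # part-of-speech class of the prior word
    p_total = sum(c for (q, _), c in Fpos.items() if q == p)
    pb = Fpos.get((p, w2), 0) / max(p_total, 1)
    return l_bgm * bgm + l_ugm * ugm + l_pos * pb
```

With the $B = 2$ row of Table 2.1, the weights would be roughly (0.32, 0.38, 0.30) for the bigram, unigram, and POS components.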
### 2.2.2 Previous Live Input System

The original live input module used in the Translator’s Aid system used a modified version of HTK™ release 1.4A.¹ Specifically, modifications were made to an HTK™ recognition tool called HVite. The modifications allowed custom parameterization in the form of mel filter banks, which, at the time, was an important feature to the original designers. Control of the audio device was transferred from HVite to auxiliary modules. Requests to start/stop recording were initiated via file creations. For example, the auxiliary modules directly controlling the audio device would periodically check for the existence of io.go and io.stop files. io.go indicated that the audio device should begin reading audio data into its buffers, while io.stop meant that it should stop reading data. At a higher level, the files were created within the GUI in response to user requests to start/stop recording; requests were initiated by the user pressing the appropriate button on the GUI. The buffered data was sent through a pipe to a mel filter bank module whose output was sent to the customized version of HVite, which was altered to accept standard input. The speech recognition development for the current project relied on the most recent version of HTK™ (release 2.1). As such, it was necessary to use HVite release 2.1 to implement a live input recognition system. Enabling the Translator’s Aid system to accommodate the latest HTK™ formats was only a minor benefit, however. The major advantage in upgrading to release 2.1 stems from the versatility possessed by HVite in controlling the audio device. We will revisit this recording issue in the next chapter. Fortunately, custom parameterization was not an issue, since all development used native HTK™ parameter types. For that reason, we were able to use HVite (release 2.1) without modification.

---

¹HTK™ is a software environment for developing speech recognition systems; it will be introduced in the next chapter.
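The file-creation protocol just described can be sketched as a polling loop. Only the io.go/io.stop convention comes from the original system; the function and callback names below are our own:

```python
import os
import time

def watch_control_files(on_start, on_stop, directory=".", poll_s=0.1):
    """Poll for the io.go / io.stop control files and invoke the
    corresponding audio-device callback, consuming the file each time.
    Returns once a stop request has been handled."""
    while True:
        for name, action in (("io.go", on_start), ("io.stop", on_stop)):
            path = os.path.join(directory, name)
            if os.path.exists(path):
                os.remove(path)   # consume the request
                action()
                if name == "io.stop":
                    return
        time.sleep(poll_s)
```

In the original system the GUI played the producer role, creating these files when the user pressed the record buttons.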
Chapter 3

Design

3.1 HTK\textsuperscript{TM} Development Environment

As mentioned earlier, the OR corpus is a set of text sentences which define a domain-specific vocabulary and grammar. In cases such as this, where the corpus includes no acoustic data, it is customary to train the acoustic models using a standard database and then use the text corpus to estimate parameters of the language model. This process is enabled by a software-based development tool called the Hidden Markov Model Toolkit\textsuperscript{TM} (HTK\textsuperscript{TM}). HTK\textsuperscript{TM} was developed at Cambridge University’s Speech, Vision and Robotics Group and is commercially distributed by Entropic Research Laboratory, Inc. It provides all the computational features needed to synthesize continuous and discrete density recognition systems. For instance, HTK\textsuperscript{TM} offers ready-to-use programs which perform Baum-Welch estimation and Viterbi search. In addition, the toolkit can calculate bigram statistics needed in the language model.

3.1.1 Data Preparation

It should be emphasized that HTK\textsuperscript{TM} relies on externally supplied speech data to perform training and testing. For the development of acoustic models we have been using the Wall Street Journal (WSJ) corpus prepared by the Linguistic Data Consortium (LDC). The WSJ training data that we have used consists of about 30,000 utterances covering 200 speakers along with word transcriptions. All utterances were recorded using Sennheiser noise-cancelling microphones at a 16 kHz sampling rate. Recall that a pronouncing dictionary is required to expand words into their equivalent HMM chains. The pronouncing dictionaries we used were generated at LDC and Carnegie Mellon University. Neither dictionary could fully cover the entire vocabulary appearing in the WSJ corpus; this created the need for both.
The resulting composite dictionary used a 39 phoneme set and covered nearly all the words appearing in the WSJ utterances we used. The residual entries were entered by hand. The next step in the data preparation process involves parameterizing the recordings followed by compressing and storing the parameterized data using the appropriate HTK™ tools. The ASCII files containing the word transcriptions can also be converted to the HTK™ specific file format at this point; this is accomplished with shell scripts. It should be noted that WSJ transcriptions are word-level and not phone-level; time-alignment information was not available. So, we were forced to use the so-called flat-start approach. Time-alignment is usually performed by a speech expert during corpus development. An attempt is made to identify the approximate time intervals corresponding to each phone appearing in the utterance. This interval information is then included in each transcription. Time-alignment must be done by hand and is very tedious. As such, it is common to omit it in large corpora such as WSJ. In these cases all phones are assumed to have equal duration. This has the consequence of initially assigning equal shares of acoustic observations to each phone in an utterance. Such an approach is called a flat-start. Lee[4] has shown experimentally that the uniform intervals used in a flat-start will eventually converge to the true ones during Baum-Welch reestimation.

### 3.1.2 Acoustic Model Training

The HTK™ development philosophy is based on refining an evolving set of models incrementally. So, the development cycle is one of testing the current set, altering the architecture to balance the discrimination/trainability trade-off, and testing the new set to gauge improvement. With this approach in mind, it is convenient to start with a set of monophone models which, after sufficient training, can be transformed into context-dependent triphones.
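The flat-start assignment described earlier amounts to a uniform segmentation of the utterance; a minimal sketch, with illustrative names and an integer frame representation:

```python
def flat_start_intervals(num_frames, phones):
    """Assign each phone an equal share of the acoustic observations,
    as is done when no time-alignment information is available.
    Returns (phone, start_frame, end_frame) triples."""
    per = num_frames / len(phones)
    bounds = [round(i * per) for i in range(len(phones) + 1)]
    return [(p, bounds[i], bounds[i + 1]) for i, p in enumerate(phones)]
```

Per the cited result, these uniform boundaries serve only as an initialization; Baum-Welch reestimation moves them toward the true ones.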
Monophone training begins by selecting a prototype HMM from which to generate models for the set of phonemes used. This is done by creating an HTK\textsuperscript{TM} format HMM file. HMM files contain a specification of the parameterization expected by the model (cepstral coefficients, LPC coefficients, etc.), the number of states, the transition probability matrix, and initial statistics. We should note at this point that all of the model sets reported in this paper accept mel frequency cepstra as parameters, including difference and acceleration coefficients. Once this has been done, the global statistics of the speech data replace the initial statistics, resulting in a new prototype which is then replicated to create each initial monophone model. A silence model is also created at this point. Its purpose is to model the silences which appear at the start and finish of most utterances. The word-level transcriptions are expanded using the pronouncing dictionary to generate phone-level transcriptions, which are then used in Baum-Welch reestimation; silence symbols are inserted at the beginning and end of each transcription. After several iterations of Baum-Welch a single-state short-pause model is created to model brief silences between words, and short-pause symbols are inserted at the end of each word in every transcription. After several more iterations of Baum-Welch the monophone models are ready to be tested and cloned to generate context-dependent triphones. To facilitate triphone training, monophone transcriptions are converted to triphone transcriptions. This is followed by another set of Baum-Welch reestimation iterations. At this point a set of well-trained triphones has been generated. While these triphones cover all of the phonetic combinations appearing in the WSJ training data they do not in general comprise all of the triphones appearing in the application’s grammar.
These unseen triphones are determined by expanding the word transcriptions of the application corpus to their triphone equivalent and recording all new triphones encountered. The existence of unseen triphones is a common problem in training. The consequence is that parameters belonging to the unseen triphones must be pooled with those belonging to triphones that are seen in the training data. The standard method of accomplishing this pooling within HTK™ is by way of tree-based clustering. Tree-based clustering is a method of merging model states according to phonetic similarity. Merging is controlled by using phonetic decision trees. Phonetic decision trees are a means of classifying a set of context-dependent models. Each node in the tree is assigned a phonetic question such as “is the left context a nasal?” or “is the right context a glottal stop?”. The questions always have yes/no answers, so the tree is binary in structure. To classify a state one starts at the root node and traverses the tree by applying each question encountered to the given model and taking the appropriate branch. When a terminal node is reached, the given state is merged with any other states sharing the same terminal node. The trees are constructed in a self-organizing manner. The user presents a list of candidate questions to the clustering tool and it assigns questions to nodes such that the likelihood of the data is maximized. The states of models seen in the training corpus are used to calculate this likelihood. The tree is extended until the increase in likelihood achieved with a branch is less than a user-specified value. Then, when states of unseen models are sent through the tree, they are assured to be clustered with those of seen ones. After several more iterations of Baum-Welch a set of baseline triphones is ready for testing.
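The classification step can be sketched as a binary-tree traversal. The node layout and example question below are illustrative only, not HTK™'s internal representation:

```python
class Node:
    """A phonetic decision-tree node: either an internal node holding a
    yes/no question over a triphone context, or a terminal state cluster."""
    def __init__(self, question=None, yes=None, no=None, cluster=None):
        self.question, self.yes, self.no, self.cluster = question, yes, no, cluster

def classify(node, triphone):
    """Apply each question encountered, taking the appropriate branch,
    until a terminal node (a shared state cluster) is reached."""
    while node.cluster is None:
        node = node.yes if node.question(triphone) else node.no
    return node.cluster

# Example: "is the left context a nasal?" at the root.
# A triphone is represented as a (left, center, right) tuple.
NASALS = {"m", "n", "ng"}
tree = Node(question=lambda t: t[0] in NASALS,
            yes=Node(cluster="cluster-A"),
            no=Node(cluster="cluster-B"))
```

Because every unseen triphone answers some sequence of questions, it is guaranteed to land in a terminal node shared with seen models, which is exactly how the pooling works.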
In this project we considered three variations on the baseline set: 1) cross-word triphones, 2) pseudo-cross-word triphones, and 3) function-word triphones. Pseudo-cross-word triphones are generated by first determining the cross-word triphones appearing in the application corpus. Those triphones which also happen to appear in the WSJ word-internal triphone list can be modelled as such. The residual is modelled via tree-based clustering. So, they are, in effect, cross-words trained as word-internals. The advantage of using pseudo-cross-words is that unlike full-blown cross-words they do not substantially increase the number of models encountered in training, and therefore, maintain model robustness. We used K.F. Lee's 42 function-word set that was used in CMU's SPHINX system.[6] The modifications necessary to accommodate function-words consisted of modifying the pronouncing dictionary and assigning an HMM to each function word. We considered three-state, six-state, and nine-state topologies. The longer topologies were considered based on the logical assumption that models representing multiple phonemes have longer duration, and therefore, consume more acoustic observations than monophones. An evaluation of all the model sets just described is included in the next chapter.

3.1.3 Language Model Training

As discussed earlier in the section on related work, parameter interpolation using POS bigrams was explored as a way to construct a reliable language model. Preliminary results indicated, however, that such an approach would not yield the advantages expected. Thereafter, we relied exclusively on the back-off bigram, whose parameters are readily computed using HTK\textsuperscript{TM} tools.
The formula HTK\textsuperscript{TM} uses to estimate bigram probabilities is as follows, \[ P(j|i) = \begin{cases} (N(i,j) - D)/N(i), & \text{if } N(i,j) > t \\ b(i)p(j), & \text{otherwise.} \end{cases} \] where \[ p(j) = \begin{cases} N(j)/N, & \text{if } N(j) > u \\ u/N, & \text{otherwise.} \end{cases} \] and \[N = \sum_{j=1}^{L} \max[N(j), u].[14]\] Probability mass is transferred from the frequently occurring bigrams to the infrequent ones. Probabilities of infrequent bigrams are backed off to unigram frequencies. In some sense, this is a form of nonlinear smoothing. \(N(i,j)\) is the number of times word \(j\) follows word \(i\). \(N(i)\) is simply the number of times word \(i\) appears in the corpus. The \(b(i)\) factor ensures that bigram probabilities sharing the same word history \(i\) sum to unity; it can also be seen as a scaling operation that emphasizes infrequent bigrams. The other quantities in the relation, \( D, t, \) and \( u \), are the only free parameters. The discount parameter \( D \) specifies how much probability mass will be extracted from frequent bigrams. The threshold \( t \) differentiates between frequent and infrequent bigrams, while the threshold \( u \) sets a floor on the minimum unigram count to avoid zero probabilities. All three parameters are user-specified; their optimum values are dependent on the application corpus and must be determined empirically. Values were chosen to minimize the perplexity of the resulting language model. The perplexity was calculated using the HTK\textsuperscript{TM} tool \texttt{HSGen}. First, \texttt{HSGen} randomly generates a set of sentences according to the statistics of a given language model. The perplexity of this set is then computed. Since the sentences are generated randomly the perplexity will vary with each call to \texttt{HSGen}. The variance can be kept small, however, by configuring the tool to generate a large number of sentences; large sample sizes give low variances.
We found that a variance under 3% of the sample mean was achieved when sampling 1,000 sentences. This yielded acceptable estimates of the true value. For the OR corpus we found the best values to be \( D = 0, t = 0, \) and \( u = 1 \). The resulting perplexity is about 4. The conclusions one can draw from this are as follows: 1) the differences between the highest and lowest frequencies were not sufficient to warrant discounting, 2) if a bigram appeared at least once it is best to use its relative frequency rather than backing-off to a unigram frequency, and 3) the bigram probability floor should be no less than \( \frac{1}{V} \), the a priori probability of a given word, where \( V \) is the vocabulary size. Backing-off to unigram frequencies is only beneficial in cases where there are no bigram frequencies. Often, minimizing the perplexity of a grammar results in a highly constrained language model which lacks the ability to predict new data. In cases where a less constrained grammar is desired, the HTK\textsuperscript{TM} recognizer can be invoked with an optional grammar scale factor \( s \). During the decoding process all language model probabilities will be raised to the power \( s \), allowing one to de-emphasize the language model to an arbitrary degree; the exponent operation non-linearly warps the (log) probabilities so that they become closer in value. When $s$ is greater than unity probabilities are *squeezed* toward zero, whereas when $s$ is less than unity they are *squeezed* toward one. When language model probabilities get closer in value their impact on decoding is lessened. The drawback of this technique, not surprisingly, is an increased perplexity. Figure 3-1 depicts this nonlinear mapping.

Figure 3-1: Nonlinear mapping of probabilities when scale factor $s$ is applied to (log) probabilities.
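The warping can be illustrated with a two-word toy distribution. This is a minimal sketch of the exponent operation itself, not HTK decoder code:

```python
def scale(probs, s):
    """Raise each language-model probability to the grammar scale factor s."""
    return [p ** s for p in probs]

probs = [0.9, 0.1]
toward_one  = scale(probs, 0.5)  # s < 1: every value is pushed toward one
toward_zero = scale(probs, 2.0)  # s > 1: every value is pushed toward zero
```

With s below unity the ratio between the two probabilities shrinks (here from 9:1 to 3:1), which is what lessens the language model's influence on decoding.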
Having introduced the model development process, we are ready to discuss the integration of an HTK™ recognition kernel into the GUI to enable recognition of live speech input.

### 3.2 Enabling Live Input

So far, we have discussed the technique used to synthesize an HMM set for the OR task domain. We now move on to address the issue of integrating the set into a live audio input system. As mentioned earlier, arriving at the desired model set for an application involves an iterative process of model adjustment followed by model evaluation. The latter step is a means of deciding which combination of adjustments (e.g. the number of Gaussian mixtures to use and the degree of state clustering) yields the best performance. HTK™ provides a shell tool called HVite which performs Viterbi decoding on a set of input speech files. This off-line decoding is useful in testing a sequence of model sets using a fixed set of speech files. Alternatively, HVite can also control the audio device of its host to allow direct audio input. Thus, it serves as a useful building block for live-input applications. Via HTK\textsuperscript{TM}'s standard configuration files, HVite can be configured to accept microphone or line input and speaker, line, or jack output. The main issue in controlling HVite is choosing the method of starting and stopping the record operation. HVite offers three ways to control recording, which may be chosen via the configuration file. One method is key-press control, where the user can start and stop recording by pressing a specified key. Another employs a novel speech detector which intelligently records live speech. The last method relies on inter-process signalling to start and stop recording. For this project inter-process signalling was the most appropriate, since it could be easily integrated into the existing graphical user interface. The recognizer output is sent to standard output (usually a terminal) and can optionally be written to external files.
For this project we chose to copy the standard output to a file since its format was compact and, more importantly, indicated the state of the recognizer. The simplest way to run HVite with inter-process signalling is to arrange for a parent process to spawn a child which, in turn, invokes HVite as a shell command. In our case, the GUI is the natural choice for the parent. The GUI can send signals to HVite when processing user requests to start/stop recording. Signalling is allowed to be asynchronous, complying with a user's general desire to record at will. There are distinct periods, however, when HVite does not expect, and cannot handle, start/stop signals. Usually, sending untimely signals to such a process will produce unpredictable, and often undesirable, results (e.g. killing the receiving process). This was a design decision imposed by the authors of HVite and warrants the attention of any application developer using the program in direct audio mode. We now describe HVite's direct audio functionality in more detail. Once invoked from a shell, HVite commences an initialization stage in which the recognizer loads model sets, pronouncing dictionaries, and word networks, and sets several parameters. It should be noted that initialization is performed only once. During this time user signals are not handled explicitly and must be blocked by interface software. On completion, the recognizer waits for a signal to begin recording followed by another to cease recording. Next, HVite begins recognizing and no longer handles user signals explicitly. Again, user signals must be blocked at these times for the reasons mentioned above. At this point, HVite has completed the cycle and waits for a user signal to start recording again. Because timing is critical when sending signals to HVite, it is convenient to treat the recognizer as a state machine; the recognizer responds to user signals in a manner that depends on its current state.
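The parent/child arrangement described above, with the child's standard output copied to a log file that the parent can scan, can be sketched with Python's subprocess module. A trivial one-liner stands in for the real HVite command line, which is not shown here:

```python
import os
import subprocess
import sys
import tempfile

# Parent spawns a child whose standard output goes to a log file; the
# parent later reads the log to infer what the child did. The Python
# one-liner below is a placeholder for the actual HVite invocation.
fd, log_path = tempfile.mkstemp(suffix=".log")
with os.fdopen(fd, "w") as log:
    child = subprocess.Popen([sys.executable, "-c", "print('READY')"],
                             stdout=log)
    child.wait()

with open(log_path) as f:
    log_contents = f.read()
os.remove(log_path)
```

Scanning `log_contents` is the analogue of scanning HVite's redirected standard output for prompts and recognition results.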
A logical choice for recognizer states is 1) off, 2) initializing, 3) waiting, 4) recording, and 5) recognizing. Figure 3-2 shows the associated state diagram. We prohibit user signalling in states 1, 2, and 5 for the reasons just mentioned. Signalling is unidirectional; that is, HVite does not inform its parent process of its current state. The state must, therefore, be inferred from the standard output. It was mentioned above that HVite records its activity on standard output; it is a way of prompting the user for new audio input as well as displaying recognition results. This implies that standard output may be scanned to give not only recognition output, but also the current state of the recognizer. When the user initiates key actions (asynchronously), the state is updated automatically, based on standard output. We decided to encapsulate all state bookkeeping and low-level signalling into an abstraction layer appropriately named hvite-interface. The interface consists of five functions which enable the caller to query recognition status and initiate record signals safely. Their functionality is explained briefly below; see the synopsis and descriptions in Appendix A. startHVite takes the recognizer from the off to the initializing state. It is responsible for spawning a child process which, in turn, runs the HVite program. The child process invokes HVite in such a way that the program's standard output is copied to a log file. This file is then used by the other interface functions to update the recognizer state so that subsequent user requests are processed appropriately; updating state during user calls means state is updated asynchronously. cleanupHVite brings the recognizer to the off state. This function kills the child process running HVite. signalHVite services a user request to signal HVite. If the recognizer is in the off, initializing, or recognizing states, this function takes no action and returns with a negative status.
When in the waiting or recording states, a signal is sent to HVite, which starts or stops recording, respectively. To ascertain the current state of the recognizer, signalHVite checks the log file. It should be noted that HVite cannot handle arbitrary-length recordings; the record process relies on limited memory resources. This makes it necessary to limit recording duration. To deal with this issue an optional timeout capability can be added to hvite-interface during compilation. It allows recording duration to be limited to a specified number of seconds. This is accomplished by requesting the operating system to send an alarm signal to the calling process after the desired period; this request is made whenever recording is started. Alternatively, timeout capability can be implemented by the user. This is useful when hvite-interface is used within an event-driven environment, where some standard system calls can create conflicts. In this project, the GUI which accessed hvite-interface is an X Windows\textsuperscript{TM} application, so timeout capability was implemented within the GUI. \texttt{getRecogOut} copies the current recognition output, if available, to a user-specified array; if no output is available, it returns with a negative status. \texttt{clrRecogOut} effectively clears the last recognition output read from the log file. By calling this function after every successful call to \texttt{getRecogOut}, one ensures that future calls to \texttt{getRecogOut} will return new output only. Figure 3-3 presents a flow diagram which demonstrates how these five functions are used in a live-input application. hvite-interface is a modular interface to the recognizing function of HVite that is simple to use within a user interface environment. Although it accomplishes the necessary tasks, it is rather inefficient as far as CPU time, memory requirements, and process activity are concerned.
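The signal-gating rules described above can be modelled as a small five-state machine. The sketch below is an illustration in Python, not the hvite-interface C code: user signals are honoured only in the waiting and recording states, and rejected with a negative status elsewhere, mirroring signalHVite's behaviour.

```python
# Toy model of the recognizer state machine (illustration only).
OFF, INITIALIZING, WAITING, RECORDING, RECOGNIZING = range(5)

class RecognizerModel:
    def __init__(self):
        self.state = OFF

    def start(self):
        """startHVite analogue: off -> initializing -> waiting."""
        self.state = INITIALIZING
        # ... initialization completes (in reality inferred from the log) ...
        self.state = WAITING
        return 0

    def signal(self):
        """signalHVite analogue: gate start/stop requests by state."""
        if self.state == WAITING:        # start recording
            self.state = RECORDING
            return 0
        if self.state == RECORDING:      # stop recording; begin recognizing
            self.state = RECOGNIZING
            return 0
        return -1                        # signal rejected in states 1, 2, 5

    def recognition_done(self):
        """Output appeared in the log: back to waiting for the next cycle."""
        self.state = WAITING

recognizer = RecognizerModel()
```

A full record/recognize cycle is then start, signal (begin recording), signal (stop and recognize), recognition_done, after which the cycle can repeat.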
In Chapter 5, we propose a better solution that should alleviate these inefficiencies.

Figure 3-3: hvite-interface flow diagram

## Chapter 4: Results

This chapter is structured as follows. First, we discuss our evaluation methods. We then move on to the recognition performance of the monophone set, the baseline triphone set, and the three triphone variations; all five sets use single-mixture Gaussian densities.

### 4.1 Methods of Evaluation

The language model used in evaluation is the same back-off bigram model mentioned in the previous chapter. The test utterances were generated by reading sentences taken from the OR corpus. There were 62 sentences, covering 680 words, and each was read once by a single test speaker. The test speaker did not participate in the recording of the WSJ corpus which was used to train the acoustic models. Therefore, the results for this test set give a good indication of the extent to which each system is speaker-independent. Even so, all the text used in testing was also used to generate the language model. So, from a language model standpoint, this is not a fair test, in which test data is independent of training data. The decision to employ such a test arose from the fact that the 111 training sentences were not accompanied by a dedicated test set, making it the only alternative. Dividing the corpus into a training and test set was considered as a solution to this problem. This approach only introduced a new problem, however; due to the paucity of text data we could not find a training set which included examples of all the words in the vocabulary. Unseen data is inevitable when trying to partition such a small corpus into training and test sets; it also leads to poor language models. The objective measure used for evaluating each model set is word accuracy. This quantity is computed by comparing the recognized transcriptions to the true transcriptions.
Before making this comparison, however, it is necessary to align the true and recognized transcriptions. This is accomplished by a dynamic programming string-alignment algorithm which tries to line up words appearing in both transcriptions. The alignment makes it easier to distinguish between word-insertion, word-deletion, and word-substitution errors. Word accuracy is then the ratio of the number of correctly recognized words, less insertions, to the number of words in the test transcription. That is, \[ A_w = \frac{R - I}{N}, \] (4.1) where \( R \) is the number of correctly recognized words, \( I \) the number of insertions, and \( N \) the number of words in the test transcription. Sentence accuracy is also useful for assessing performance. It is the ratio of the number of correct sentences to the total number of sentences, where a sentence is considered correct only if all words in the sentence are recognized correctly. There is an HTK\textsuperscript{TM} tool called \texttt{HResults} which computes both of these statistics, along with insertion, deletion, and substitution counts.

### 4.2 Results Presented

The results are shown in Table 4.1. The last three columns indicate the number of insertions, deletions, and substitutions, respectively. First of all, we see that there is considerable improvement when moving from monophones to word-internal triphones. This was expected, since the triphones are better at acoustic discrimination. There is little gain from switching from plain word-internal triphones to word-internal function-word triphones.
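As a numerical check, Equation 4.1 can be applied to the baseline counts reported in Table 4.1 (N = 680, I = 12, D = 2, S = 11). The helper below is illustrative only, not a replacement for HResults:

```python
def word_accuracy(n_ref, deletions, substitutions, insertions):
    """Equation 4.1: correctly recognized words (R), less insertions (I),
    over the number of words in the test transcription (N)."""
    correct = n_ref - deletions - substitutions   # R
    return (correct - insertions) / n_ref

# Baseline set counts from Table 4.1: N = 680, I = 12, D = 2, S = 11.
baseline_acc = word_accuracy(680, 2, 11, 12)   # reproduces ~96.3%
```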
The function-word set only saved two insertions, both of which, as it turned out, were not even connected with errors associated with function words. These results agree with the work of Lee [11, pp. 347-365], in which word accuracy went from 95.1% to 95.2% with the addition of function-word models to a word-internal set. The results for cross-word triphones, however, are unexpected. True cross-word triphones normally produce better performance than word-internal triphones due to their ability to model coarticulation effects.

<table> <thead> <tr> <th>Model Set</th> <th>Word Accuracy</th> <th>Sentence Accuracy</th> <th>I</th> <th>D</th> <th>S</th> </tr> </thead> <tbody> <tr> <td>monophone</td> <td>94.4%</td> <td>79.0%</td> <td>19</td> <td>1</td> <td>18</td> </tr> <tr> <td>baseline</td> <td>96.3%</td> <td>85.5%</td> <td>12</td> <td>2</td> <td>11</td> </tr> <tr> <td>X-word</td> <td>92.8%</td> <td>85.5%</td> <td>36</td> <td>0</td> <td>13</td> </tr> <tr> <td>pseudo X-word</td> <td>95.2%</td> <td>84.8%</td> <td>13</td> <td>5</td> <td>14</td> </tr> <tr> <td>function word</td> <td>96.6%</td> <td>85.5%</td> <td>10</td> <td>2</td> <td>11</td> </tr> </tbody> </table>

Table 4.1: Model set results. Statistics shown are word accuracy, sentence accuracy, insertions (I), deletions (D), and substitutions (S).

<table> <thead> <tr> <th>Model Set</th> <th># OR models</th> <th># WSJ models</th> <th>% OR seen</th> <th>% OR unseen</th> </tr> </thead> <tbody> <tr> <td>monophone</td> <td>41</td> <td>41</td> <td>100%</td> <td>0%</td> </tr> <tr> <td>baseline</td> <td>1,273</td> <td>7,894</td> <td>86%</td> <td>14%</td> </tr> <tr> <td>X-word</td> <td>1,210</td> <td>17,289</td> <td>91%</td> <td>9%</td> </tr> <tr> <td>pseudo X-word</td> <td>1,210</td> <td>7,894</td> <td>58%</td> <td>42%</td> </tr> <tr> <td>fun. word</td> <td>1,284</td> <td>7,933</td> <td>87%</td> <td>13%</td> </tr> </tbody> </table>

Table 4.2: Model set statistics.
In a demonstration released with HTK™, for instance, cross-words showed a 2% increase in word accuracy over word-internals when applied to the Naval Resource Management corpus.¹

¹The Naval Resource Management corpus is a standard database used by ARPA for benchmarking.

### 4.3 Discussion

Interpreting these results is straightforward if we consider the key attributes of the relevant model sets. Table 4.2 shows some statistics on model set size and training coverage which shed some light on this. The second and third columns show the number of triphones used in the OR transcriptions and WSJ training transcriptions. The fourth and fifth columns report the number of OR triphones seen and unseen in the WSJ training transcriptions as a percentage of the total number of OR triphones. Notice how the number of WSJ training models doubles when moving from word-internal triphones to cross-word triphones. This doubling implies less data per parameter trained, which in turn indicates a loss in trainability. It seems as though this loss counteracted any gains in the ability to deal with coarticulation. Pseudo cross-words clearly performed better than true cross-words. Let us see why. The training coverage is poorer for pseudo cross-words than for true cross-words (58% versus 91%). This implies that significantly fewer triphones could be trained explicitly (not through clustering) in the case of pseudo cross-words. Even so, this loss seems to be offset by the higher degree of trainability held by pseudo cross-words over true cross-words, which rely on twice the number of WSJ training models. In conclusion, the results show that whether a particular set of models in practice exhibits the advantages expected is highly dependent on the training coverage associated with that model set. Indeed, lack of training coverage results in a loss in model robustness. For this project, function-word and word-internal triphones yielded the best performance.
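The seen/unseen coverage figures of Table 4.2 are straightforward set computations. The sketch below uses invented miniature triphone sets, with names in HTK's usual "l-p+r" style, purely for illustration:

```python
def coverage(app_triphones, training_triphones):
    """Fraction of application (OR) triphones seen in the training (WSJ)
    transcriptions -- the '% OR seen' column of Table 4.2."""
    return len(app_triphones & training_triphones) / len(app_triphones)

# Hypothetical miniature sets (not the real OR/WSJ triphone inventories).
or_triphones  = {"s-t+ae", "t-ae+n", "ae-n+d", "k-aa+r", "p-iy+ch"}
wsj_triphones = {"s-t+ae", "ae-n+d", "p-iy+ch", "b-oy+l"}
seen_fraction = coverage(or_triphones, wsj_triphones)   # 3 of 5 seen
```

The unseen fraction, which must be covered by tree-based clustering, is simply one minus this value.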
In the final translation system, we chose to use the baseline word-internal set, since it requires less training time and, therefore, facilitates the development of future speech input systems covering new task domains.

## Chapter 5: Future Work

The current system can be improved in the areas of both speech modelling and interfacing to HTK™. We explore these areas below.

### 5.1 Speech Modelling

The first aspect of speech modelling which can be improved greatly is evident from Table 4.2. It indicates that in the case of triphones there is a serious lack of training coverage. That is, a considerable percentage of OR triphones never occur in the WSJ training data and must, therefore, be trained implicitly via clustering. Clustering, however, is governed by human phonetic classifications, not by an optimality criterion. So clustering is, in general, suboptimal and compromises performance; yet it is necessary when there are unseen triphones. An obvious, though not always tractable, solution would be to acquire an alternative acoustic modelling corpus which offered full training coverage of the OR triphones. A more time-intensive alternative would be to generate one's own acoustic training data. This way, one has complete control over triphone coverage in the training data. A second aspect to be improved is the development test data. Recall that the test data used for final evaluation was also used for language model training. This data was used for development testing as well, which governed the model adjustment process. For this reason, the choice of model architecture was profoundly influenced by the level of performance achieved with a training set, which is not necessarily representative of new data. The need for a test set which is independent of the training set is unavoidable. It is only by using such a test set that model architecture can be chosen to enhance the performance on new data.
An easy way of generating test data would be to simply concoct a set of test sentences by hand. These sentences should be produced by someone who is not directly involved in system development, so as to maintain an appropriate level of objectivity. An ideal candidate would be a military officer, or some other potential user of the system.

### 5.2 Interface to HTK

As mentioned in Chapter 3, hvite-interface is reliable and easy to use, but is also rather inefficient. A better way to access HTK's recognition capabilities within an application is to use HAPI, HTK's application programmer's interface (API), which recently became available. It provides a complete, seamless interface to all HTK functionality. When recording live input, there is no need to create auxiliary processes. Also, CPU time is not squandered on standard features which are found in all of HTK's command-line tools and are often not needed. An added advantage of using HAPI is that the HTK recognition modules used in a HAPI application can be upgraded without the need to change existing application code [8]. Thus, using HAPI could reduce maintenance requirements as well as improve program efficiency.

## Appendix A: Concise hvite-interface Manual

NAME

startHVite, cleanupHVite, signalHVite, getRecogOut, clrRecogOut

SYNOPSIS

#include <hvite-interface.h>
int startHVite(void);
void cleanupHVite(void);
int signalHVite(void);
int getRecogOut(char *str);
void clrRecogOut(void);

DESCRIPTION

hvite-interface is an interface to the HTK™ command-line tool HVite (HTK™ Version 2.1.1 only), which performs Viterbi decoding for speech recognition applications. The purpose of the interface is to allow relatively easy access to HVite in its direct-audio input mode, which enables continuous speech recognition of live input. In this mode, HVite provides complete audio recording capabilities, so separate recording processes are not necessary.
Starting and stopping recording is accomplished by sending a pre-assigned signal to the process running HVite. The interface is responsible for gating user requests to start/stop recording so that signals are sent to HVite only when they will not endanger program integrity. startHVite() invokes HVite via a fork, stores process ID information, and initializes state. The function returns 0 on success and -1 on failure. Possible reasons for failure are 1) the child process could not be spawned, or 2) HVite was already started with a previous call to startHVite(). Note also that multiple instances of the interface may not be accessed simultaneously. cleanupHVite() kills all child processes started by startHVite() and resets state. If HVite is not in use, no action is taken. This function has no return value. signalHVite() can be used to request the starting or stopping of recording. Whether the request is interpreted as a start or stop request depends on the current state of HVite. This function returns 0 on success and -1 on failure. It fails when HVite is not ready to accept control signals or when an attempt to signal the program fails. getRecogOut() copies recognition output, if available, to str and returns with 0 status. If output is not available, this function returns with -1 status. clrRecogOut() effectively clears the last recognition output from interface memory. By calling this function after every successful call to getRecogOut(), one ensures that future calls to getRecogOut() will return new output only. In normal operation of the interface, startHVite() is invoked first to start running HVite. signalHVite() is called whenever a start/stop of recording is desired. getRecogOut() can be used for querying recognition status and/or fetching current recognition output. getRecogOut() will normally appear in a polling routine that periodically checks for new recognition output.
In this case, each call to getRecogOut() should be followed by a call to clrRecogOut() to reset the state of the interface. The record/recognize cycle can be repeated indefinitely.

## Bibliography
Is Your Testing N-wise or Unwise? Pairwise and N-wise Patterns in SystemVerilog for Efficient Test Configuration and Stimulus Jonathan Bromley Verilab Ltd, Edinburgh, Scotland jonathan.bromley@verilab.com Kevin Johnston Verilab Inc, Austin, Texas, USA kevin.johnston@verilab.com Abstract- Pairwise, and more generally N-wise, pattern generation has long been known as an efficient and effective way to construct test stimulus and configurations for software testing. It is also highly applicable to digital design verification, where it can dramatically reduce the number and length of tests that need to be run in order to exercise a design under test adequately. Unfortunately, readily available tools for N-wise pattern generation do not fit conveniently into a standard hardware verification flow. This paper reviews the background to N-wise testing, and presents a new open-source SystemVerilog package that leverages the language's constrained randomization features to offer flexible and convenient N-wise generation in a pure SystemVerilog environment. I. INTRODUCTION In this paper we claim that the technique known as pairwise testing (or, more generally, N-wise testing) is applicable to design verification (DV), where it can significantly reduce the amount of testing effort required to achieve confidence that a design has been adequately verified. We describe a new open-source package that supports N-wise testing within any class-based SystemVerilog verification environment, including those based on the UVM [1] or similar methodologies. Finally we report on the features and performance of our package relative to other available N-wise tools. Section II outlines the background to the problems that can be addressed by N-wise testing. Section III is a brief tutorial review of the fundamentals of N-wise testing and generation. 
Readers who are already familiar with these concepts may wish to skip those two sections and go directly to section IV, which shows by example how our SystemVerilog N-wise package can be applied in practice. Section V covers other practical considerations, including UVM integration and planned enhancements. Section VI gives some details of the internal implementation of our package, and section VII reports some measurements of its effectiveness relative to other available tools. Finally, section VIII summarizes our conclusions, and section IX provides references and further reading. II. BACKGROUND A. Exhaustive Testing is Impossible Any non-trivial device under test (DUT) has so much state that there is no way to test it exhaustively in any realistic period of time. Current best practice in verification relies on a number of well-established techniques for exploring those parts of the DUT's state space, and the space of possible stimulus, that are most important for real applications and for discovering possible design errors. Constrained random stimulus generation, combined with carefully planned and measured functional coverage, aims to exercise all practically useful operational scenarios with maximum automation and minimum engineering effort. B. An Alternative Approach In this paper we argue that the technique known as N-wise testing is worthy of consideration alongside these established strategies, especially in handling the range of possible configurations or setups of a given DUT. We review existing practice in the field, and present an open-source SystemVerilog package that supports N-wise testing in a convenient and flexible way that fits easily into existing widely-used verification methodologies. C. The Problem of Configurability Almost every DV project requires that testing be performed on a varied set of configurations of the DUT. For typical design IP projects, the structure of the DUT (number of ports, presence or absence of features, etc.)
is highly configurable by its users, and the IP vendor must take great care to confirm that all legal configurations will work as expected and that illegal or meaningless configurations are correctly trapped by assertions or other checks. Other designs may have a fixed structural configuration, but are programmable through registers or pin-strapping so that the design can operate in various modes, with the modes being altered relatively infrequently. In both cases it is a major verification challenge to ensure that the DUT's configuration space has been thoroughly and realistically exercised. It is not unusual to find that the space of possible configurations is itself large enough to make exhaustive testing impractical, even before any run-time activity has been exercised. For example, consider an IP block with between 2 and 8 bus interfaces, each of which can support 4 different protocols. Additionally, each port has a buffer whose depth can be chosen to be between 1 and 8 transactions. Already, with only this modest configurability, we have more than 1000 plausible configurations that a user might set up. In a high-quality verification project, any single configuration will be the subject of many hundreds of individual test cases in order to achieve good coverage of the verification plan. Configurability has dramatically enlarged the verification problem. D. Pragmatic Management of the Configurability Problem In practice, users typically choose a modest number of realistic configurations (including, as a minimum, those specified by key early customers) and run their complete test suite on those, targeting full coverage for each. Additionally, a randomized selection of configurations may be tested, but they are likely to be so numerous that it is impractical to apply the complete battery of tests to all of them, and so a limited subset of tests is used. This leads to a worrying dilemma.
If a sufficiently large set of configurations is used, so that the team can claim good coverage of the configuration space, then it is likely that many of those configurations will have been inadequately tested. On the other hand, if each new configuration is tested to the point of full coverage according to the verification plan, this will be sufficiently time-consuming that there is little chance of covering the desired range of configurations. III. PAIRWISE AND N-WISE TESTING: BLAST CONTAINMENT FOR THE STATE EXPLOSION For many years, software testers - faced with configuration space challenges very similar to those described above - have turned to pairwise testing to bring the problem within reasonable bounds. The key idea behind pairwise testing is that difficult bugs are most commonly triggered by the interaction, or coincidence, of two parameters and not more. The reasoning goes like this (and is justified in much more detail in papers surveyed in reference [2]): - Although it is common to find bugs that are triggered when a single parameter has some special value, those bugs are almost always found quite early. It is not difficult to scan each individual parameter over its full range of values, testing at each different value of that parameter. - More difficult bugs require more than one parameter to take a specific value. For example, it may be that if buffer A has its minimum size, and buffer B has its maximum size, then there is a deadlock. It is clearly important to seek out these situations in an exhaustive manner. - Bugs that occur only when three or more parameters have specific values appear to be rare. It is neither practical nor useful to cover all such situations. 3-wise and higher order pattern generation is possible, and is supported by the techniques described in this paper. If a verification project's schedules permit, higher order N-wise pattern sets can be tested to reduce the risk of missing subtle corner-case bugs. 
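The reasoning above has a simple combinatorial backing: for N parameters each of cardinality C, exhaustive testing needs C^N patterns, while the number of pairwise targets that a 2-wise pattern set must cover grows only polynomially. A quick check (illustrative Python, independent of the SystemVerilog package presented later):

```python
from math import comb

def exhaustive_patterns(n, c):
    # Every complete assignment of values to all n parameters.
    return c ** n

def pairwise_targets(n, c):
    # Number of (parameter-pair, value-pair) combinations a 2-wise
    # pattern set must cover: c*c value pairs per pair of parameters.
    return comb(n, 2) * c * c

# Exhaustive cost explodes; the pairwise coverage goal stays modest.
# e.g. n=10, c=4: 1048576 exhaustive patterns vs only 720 pairwise targets.
for n in (5, 10, 20):
    print(n, exhaustive_patterns(n, 4), pairwise_targets(n, 4))
```

Since one pattern can cover many targets at once, known covering-array constructions need only roughly C² log N patterns, which is why pairwise sets stay compact even for many parameters.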
Consequently, pairwise testing proposes that for any given set of parameters (or register values, or any other such adjustable values) we should construct a set of tests that exhaustively exercises every possible combination of any pair of parameters. In many cases this can be done with a surprisingly compact test set. By considering a rather small example we will show how effective pairwise test construction can be. First, however, it is useful to establish some terminology. A. Terminology used throughout this paper 1. Parameters, value-sets and cardinality Each individual variable (or module parameter, or register value, or other adjustable value) that participates in the test configuration is known as a parameter\(^1\). Each parameter has a discrete collection of possible values that we call its value-set. For example, a Boolean parameter has a value-set containing the two values TRUE, FALSE. The number of values in a parameter’s value-set is the parameter’s cardinality. A Boolean parameter has cardinality 2; an eight-bit parameter has cardinality 256. 2. Patterns In any given situation we have a collection of parameters, usually with differing cardinalities. This collection of parameters defines the problem space. A pattern in that problem space is a complete set of specific values, one value \(^1\) The word “parameter” has a specific meaning in SystemVerilog, and we considered using factor to denote an adjustable term in an N-wise set. However, use of “parameter” is so widespread in the literature of N-wise testing that we felt it would be confusing to use a different term here. Consequently we have accepted this new use of “parameter”, and we use the phrase module parameter if necessary to denote the SystemVerilog language feature. for each parameter. Each pattern represents a single test case or a single configuration to be tested. The total number of possible patterns is easily calculated: it is simply the product of the cardinalities of all parameters. 3. 
N-wise Groupings To perform pairwise testing, we must identify every possible pair of parameters. For example, if we have three parameters P1, P2 and P3, then we want to try every combination of values of parameters P1 and P2, every combination of parameters P2 and P3, and every combination of parameters P1 and P3. Each of these pairs (P1-P2, P2-P3, P1-P3) is a grouping. More generally, we can imagine N-wise testing in which we consider all possible values of every grouping of N parameters. There is also a degenerate (but nevertheless interesting) case of 1-wise testing, in which our groupings each contain only one parameter, and our aim is simply to test that every value of each parameter has been tested at least once (i.e. it has appeared in at least one pattern). 4. Combinations and pattern coverage For each grouping, we identify the complete set of all possible values of the group's N parameters. Each member of such a set is called a combination. A combination denotes specific values for each of the N parameters in its group, but also marks all remaining parameters as "don't care". If the parameters in a given N-wise grouping have cardinality \( C_1, C_2, \ldots, C_N \) then the number of combinations in the grouping is the product of its parameters' cardinalities: \( C_1 \times C_2 \times \ldots \times C_N \). If a pattern has the same values of a group's N parameters as does some combination, then we say that the pattern covers the combination. Clearly, a pattern can cover only one of a grouping's combinations. However, such a pattern usually covers combinations of other groupings too, and it is this multiple coverage that gives N-wise testing its power. The sole exception occurs when N is the same as the number of parameters; this is equivalent to demanding a pattern set containing every possible combination of every parameter. B. 
An Example of Pairwise Test Generation We now present a small but plausible example to illustrate the value of pairwise testing in reducing the number or length of tests that must be run. As already mentioned, pairwise (2-wise) is the most common and often the most useful form of N-wise testing. Consider a small DUT having an 8-bit configuration register. Although this register can be modified at any time, it affects a communications protocol and therefore in practice it will be set up and then left unaltered for an extended period during a test. Some tests must, of course, modify this register on the fly, but the majority of functional testing uses it in a static setup that is configured just once at the start of a test. This 8-bit register has 256 possible values, and therefore it would seem that at least 256 tests must be run to exercise all possible operating modes. However, on closer examination we find that the register is naturally split into several fields, each controlling one aspect of device operation. - field F1 (2 bits, 4 values) - field F2 (2 bits, 4 values) - field F3 (2 bits, 3 valid values, one illegal value) - field F4 (1 bit, 2 values) - field F5 (1 bit, 2 values) The restriction of F3 to only three values means that there are 192 rather than 256 meaningful patterns. Using pairwise testing, however, we need to exercise many fewer combinations. There are: - 16 possible combinations of F1 and F2, - 12 possible combinations of F1 and F3, - ... - 4 possible combinations of F4 and F5. Using the SystemVerilog implementation of N-wise generation described in this paper, pairwise coverage was achieved using only 16 test patterns, as shown in the following table. Inspection of the table shows that every possible combination of pairs of parameters has been covered. As an example, coverage of the 12 combinations of parameters F2 and F3 has been shaded, and coverage of the 4 combinations of F4 and F5 has been hatched. 
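The counts quoted in this example can be reproduced with a few lines of script (illustrative Python, independent of the SystemVerilog package; the field names follow the register description above):

```python
from itertools import combinations
from math import prod

# Field cardinalities from the paper's 8-bit register example
# (F3 has one illegal encoding, so only 3 of its 4 values count).
cards = {"F1": 4, "F2": 4, "F3": 3, "F4": 2, "F5": 2}

total_patterns = prod(cards.values())        # exhaustive test count
pair_counts = {(a, b): cards[a] * cards[b]   # combinations per grouping
               for a, b in combinations(cards, 2)}
total_pairs = sum(pair_counts.values())      # everything a pairwise set must cover

print(total_patterns)             # 192
print(pair_counts[("F1", "F2")])  # 16
print(pair_counts[("F1", "F3")])  # 12
print(pair_counts[("F4", "F5")])  # 4
print(total_pairs)                # 88
```

So a pairwise set has only 88 distinct combinations to cover, and because one pattern covers one combination in each of the 10 groupings simultaneously, a handful of patterns can suffice.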
In this small example, pairwise test generation has reduced the number of setup configurations from 192 to only 16, a $12 \times$ improvement. Larger examples show much greater relative improvement. To illustrate this, consider a setup having 10 parameters each with 4 possible values. (Reference [2] would denote this as a "$4^{10}$" configuration.) Complete coverage of all possible combinations of these ten parameters requires more than a million test patterns. Our SystemVerilog implementation achieved pairwise coverage in only 32 patterns, a massive improvement of at least $30,000 \times$.

C. Effectiveness relative to random generation

Although our pairwise generation makes use of some randomization internally, it is deterministic in the sense that it is guaranteed to yield a set of patterns that fully cover all pairwise combinations. An alternative approach might be to use a pure random generator to construct test patterns. To compare the two approaches, we set up a simple covergroup in the class describing our five-parameter example with a cross of each pair of parameters. For example, cross coverpoint $F3 \times F4$ covered all possible combinations of the grouping of parameters $F3$ and $F4$. We then repeatedly randomized the object, sampling coverage after each randomization. After 16 randomizations (the number of patterns constructed by our pairwise generator), random generation had covered 100% of individual parameter values, but had been much less successful in covering pairwise combinations, as shown in the left-hand coverage report below. Only two of the 10 pairwise combinations were fully covered. Even after twice as many random tests (right-hand report), four of the 10 combinations were not yet fully covered. More than 50 randomizations were required to achieve 100% coverage of all pairwise combinations. By contrast, using pairwise testing would cover all the combinations with only 16 tests.
Figure 1: Pairwise coverage only partly achieved by randomization

Randomized pattern generation left some pairwise cases uncovered even after quite extensive randomization. If any of those pairwise cases were important, then either a bug could be missed, or needless testing effort might be spent in hitting the case by randomization.

IV. USING THE SYSTEMVERILOG PACKAGE

Our goal was to make it as easy as possible for users to adopt pairwise testing in their SystemVerilog [3] verification environments, using data members of an ordinary SystemVerilog class to represent the parameters. In particular, we wanted users to be able to use familiar SystemVerilog data types for their parameters, with no restriction on variable names. Internally, though, the package needs to manipulate these parameters not as individual variables, but as an array of integers. Keeping these two representations in lockstep requires a significant amount of additional code injected into the user's class, which is most conveniently done using macros. The configuration register example from section III.B above can be represented as in Code Example IV-1 below.
<table> <thead> <tr> <th>Test pattern</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> <th>11</th> <th>12</th> <th>13</th> <th>14</th> <th>15</th> <th>16</th> </tr> </thead> <tbody> <tr> <td>F1</td> <td>0</td> <td>1</td> <td>2</td> <td>3</td> <td>0</td> <td>1</td> <td>2</td> <td>3</td> <td>0</td> <td>1</td> <td>2</td> <td>3</td> <td>0</td> <td>1</td> <td>2</td> <td></td> </tr> <tr> <td>F2</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>2</td> <td>2</td> <td>2</td> <td>2</td> <td>3</td> <td>3</td> <td>3</td> <td></td> </tr> <tr> <td>F3</td> <td>0</td> <td>1</td> <td>2</td> <td>1</td> <td>2</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>2</td> <td>0</td> <td>2</td> <td>0</td> <td>2</td> <td></td> </tr> <tr> <td>F4</td> <td>4</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td></td> </tr> <tr> <td>F5</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td></td> </tr> </tbody> </table>

Table 1: Pairwise coverage of the 5-field example

<table> <thead> <tr> <th>CROSS</th> <th>After 16 randomizations</th> <th>CROSS</th> <th>After 32 randomizations</th> </tr> </thead> <tbody> <tr> <td>pwcg::f1Xf2</td> <td>68.7%</td> <td>pwcg::f1Xf2</td> <td>87.5%</td> </tr> <tr> <td>pwcg::f1Xf3</td> <td>66.6%</td> <td>pwcg::f1Xf3</td> <td>91.6%</td> </tr> <tr> <td>pwcg::f1Xf4</td> <td>87.5%</td> <td>pwcg::f1Xf4</td> <td>87.5%</td> </tr> <tr> <td>pwcg::f1Xf5</td> <td>87.5%</td> <td>pwcg::f1Xf5</td> <td>100.0%</td> </tr> <tr> <td>pwcg::f2Xf3</td> <td>75.0%</td> <td>pwcg::f2Xf3</td> <td>91.6%</td> </tr> <tr> <td>pwcg::f2Xf4</td> <td>75.0%</td> <td>pwcg::f2Xf4</td> <td>100.0%</td> </tr> <tr> <td>pwcg::f2Xf5</td> <td>87.5%</td> <td>pwcg::f2Xf5</td> <td>100.0%</td> </tr> <tr> <td>pwcg::f3Xf4</td> <td>83.3%</td> <td>pwcg::f3Xf4</td> <td>100.0%</td> </tr> <tr> <td>pwcg::f3Xf5</td> <td>100.0%</td> <td>pwcg::f3Xf5</td> <td>100.0%</td> </tr> <tr> <td>pwcg::f4Xf5</td> <td>100.0%</td> <td>pwcg::f4Xf5</td> <td>100.0%</td> </tr> </tbody> </table>

To set up N-wise generation features, a user's class must meet two requirements:

- a generator object, automatically derived from base class Nwise_base, must be created using the package-provided macro NWISE_BEGIN.
- variables representing N-wise parameters must be declared using the package-provided macros NWISE_VAR_INT or NWISE_VAR_ENUM.

An additional user constraint has been added to the example for illustration purposes.

```
`include "nwise_pkg.svh"  // Compile the package if necessary. Make macros available.
package config_info_pkg;
  import nwise_pkg::*;

  typedef enum {PARITY_NONE, PARITY_ODD, PARITY_EVEN} parity_e;

  class Config;                       // User's Nwise-enabled class
    `NWISE_BEGIN(Config, nwise_gen)   // Declares Nwise generator object nwise_gen
    `NWISE_VAR_INT(int, F1, {[0:3]})  // Equivalent to rand int F1; value-set [0:3] must be specified
    `NWISE_VAR_INT(int, F2, {[0:3]})
    `NWISE_VAR_ENUM(parity_e, F3)     // No value-set specification for enums
    `NWISE_VAR_INT(bit, F4)           // Value-set is optional for variables 8 bits or less
    `NWISE_VAR_INT(bit, F5)

    constraint c_no_parity_if_F1_zero { (F1==0) -> (F3==PARITY_NONE); }

    string name;

    function new(string nm);
      name = nm;
    endfunction

    function void print();
      $display("Config(%s) has parity=%s", name, F3.name());
    endfunction
  endclass
endpackage
```

Code Example IV-1: Using nwise_pkg to implement the example of section III.B

A. The NWISE_VAR... macros

Each NWISE_VAR... macro invocation is, in effect, the declaration of a rand data member. The first argument is the variable's data type; the second argument is its variable name.

- NWISE_VAR_INT declarations can be of any integral data type, but it is also necessary to specify a value-set for the variable. This follows the same syntax as for an inside constraint: braces containing a comma-separated list of values and ranges. The current implementation throws a fatal runtime error if the value-set contains more than 256 different values. If the data type is no more than 8 bits wide, the value-set argument can be omitted and is inferred as the full range of possible 2-state values of that data type.
- NWISE_VAR_ENUM declarations specify an enumeration type and a variable name. The value-set is inferred from the enum type's declaration.
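The bookkeeping these macros automate can be modelled language-neutrally. The sketch below is a hypothetical Python analogue (the class and method names are ours, purely illustrative; the real package works through SystemVerilog macros and constraints): each declaration contributes a name and a value-set, and oversized value-sets are rejected, mirroring the documented 256-value limit.

```python
class NwiseSpec:
    """Toy model of what the NWISE_VAR... macros record per parameter."""

    def __init__(self):
        self.params = []                 # list of (name, allowed values)

    def var_int(self, name, value_set):
        values = sorted(set(value_set))
        if len(values) > 256:            # mirrors the package's documented limit
            raise ValueError(f"{name}: value-set larger than 256 values")
        self.params.append((name, values))

    def var_enum(self, name, enum_values):
        # Value-set inferred from the enumeration's members.
        self.params.append((name, list(enum_values)))

# The five fields of Code Example IV-1.
spec = NwiseSpec()
spec.var_int("F1", range(4))
spec.var_int("F2", range(4))
spec.var_enum("F3", ["PARITY_NONE", "PARITY_ODD", "PARITY_EVEN"])
spec.var_int("F4", range(2))
spec.var_int("F5", range(2))
print([len(vs) for _, vs in spec.params])  # [4, 4, 3, 2, 2]
```

In the real package this registration happens at elaboration time via the injected macro code, so the generator can treat all parameters uniformly as indexed integer slots.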
Each of the above macros also has an _ARRAY variant allowing the user to specify a fixed-size array of parameters all having the same data type and value-set.

B. Constraints over N-wise parameters

As illustrated by constraint c_no_parity_if_F1_zero, constraints can be imposed on the relationships among the N-wise parameters. The constraint illustrated here (which, for simplicity, was not applied in our earlier example) limits the value of parity parameter F3 if parameter F1 is zero. The generation algorithm honors all such constraints when generating an N-wise test pattern set.

C. Additional properties

As illustrated by the example's additional methods and data member name, there is no restriction on what properties may be added to such a user class, nor on how the N-wise variables are used in other methods. However, it is very strongly recommended that the class have no other rand data members, as these could disturb the generation algorithm in potentially undesirable ways.

D. Generating and rendering an N-wise set

The user should now create an object of the new class type, and call methods of its generator object to perform generation and rendering of test patterns. The first step, generation, is usually performed only once by a single call to the `generate_patterns` method, specifying the N-wise order of generation (defaulted to 2 for pairwise). Generation builds an internal table of parameter patterns, and returns the number of generated patterns. User code can then call the `render_pattern` method to set up the object's data members to match any one of these patterns. The following example simply displays a table of generated values, using the built-in `pattern_debug_string` method. It assumes that the code in Code Example IV-1 has already been compiled.
```
module ConfigTest;
  import config_info_pkg::*;

  initial begin
    Config cfg;
    int num_patterns;
    cfg = new("DemoConfig");
    num_patterns = cfg.nwise_gen.generate_patterns(2);
    for (int p = 0; p < num_patterns; p++) begin
      cfg.nwise_gen.render_pattern(p);
      $display("pattern %2d: %s", p, cfg.nwise_gen.pattern_debug_string(p));
    end
  end
endmodule
```

Code Example IV-2: Using the class from Code Example IV-1

Although this example does nothing more than print the patterns on the console, normal usage would be to make use of each new pattern in turn to perform some testing. An alternate method render_pattern_new is capable of creating a completely new object with the desired contents, rather than (as in the example) modifying the existing generator object. The output from this example is shown below. Note that it differs somewhat from the results in Table 1 because of the additional user constraint that appears in Code Example IV-1 above. <table> <thead> <tr> <th>Pattern</th> <th>Values</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>{ F1:1, F2:2, F3:PARITY_ODD, F4:0, F5:0 }</td> </tr> <tr> <td>1</td> <td>{ F1:3, F2:1, F3:PARITY_NONE, F4:0, F5:1 }</td> </tr> <tr> <td>2</td> <td>{ F1:2, F2:3, F3:PARITY_ODD, F4:1, F5:1 }</td> </tr> <tr> <td>3</td> <td>{ F1:3, F2:3, F3:PARITY_EVEN, F4:1, F5:0 }</td> </tr> <tr> <td>4</td> <td>{ F1:1, F2:1, F3:PARITY_ODD, F4:1, F5:0 }</td> </tr> <tr> <td>5</td> <td>{ F1:0, F2:0, F3:PARITY_NONE, F4:0, F5:0 }</td> </tr> <tr> <td>6</td> <td>{ F1:1, F2:2, F3:PARITY_NONE, F4:1, F5:0 }</td> </tr> <tr> <td>7</td> <td>{ F1:2, F2:1, F3:PARITY_EVEN, F4:0, F5:1 }</td> </tr> <tr> <td>8</td> <td>{ F1:0, F2:2, F3:PARITY_NONE, F4:1, F5:0 }</td> </tr> <tr> <td>9</td> <td>{ F1:1, F2:0, F3:PARITY_EVEN, F4:1, F5:1 }</td> </tr> <tr> <td>10</td> <td>{ F1:0, F2:0, F3:PARITY_EVEN, F4:0, F5:1 }</td> </tr> <tr> <td>11</td> <td>{ F1:1, F2:3, F3:PARITY_NONE, F4:0, F5:1 }</td> </tr> <tr> <td>12</td> <td>{ F1:3, F2:2, F3:PARITY_EVEN, F4:1, F5:1 }</td> </tr> <tr> <td>13</td>
<td>{ F1:0, F2:1, F3:PARITY_NONE, F4:0, F5:0 }</td> </tr> <tr> <td>14</td> <td>{ F1:1, F2:3, F3:PARITY_EVEN, F4:0, F5:0 }</td> </tr> <tr> <td>15</td> <td>{ F1:2, F2:0, F3:PARITY_ODD, F4:1, F5:0 }</td> </tr> <tr> <td>16</td> <td>{ F1:0, F2:3, F3:PARITY_NONE, F4:0, F5:1 }</td> </tr> <tr> <td>17</td> <td>{ F1:2, F2:2, F3:PARITY_ODD, F4:0, F5:1 }</td> </tr> </tbody> </table> Table 2: Output from Code Example IV-2 V. OTHER CONSIDERATIONS A. UVM integration The package fits comfortably into a UVM environment. The `NWISE_VAR_*` macros can coexist with UVM field automation macros without difficulty. The generator object that implements N-wise functionality is added to a user class by composition, not by inheritance, so the user class is free to inherit from `uvm_object` or any other base class without restriction. B. Planned enhancements Reference [4] suggests two important additional features that may be useful in practical applications: - Ability to include pre-determined test patterns - possibly from a previous run, possibly from known configurations that must be tested as part of the verification plan. Generation will add only enough patterns to complete the requested N-wise coverage. - Mixed-strength testing, in which (for example) pairwise testing is made more robust by having some subset of the parameters in a 3-wise grouping. Our implementation can readily support these features, but we plan to gather more experience with the package before defining an API to support them conveniently. C. Potential drawbacks Bach and Schroeder's paper [5] on possible methodological pitfalls of pairwise testing offers valuable guidance, and raises some important questions. However, we do not find its arguments against N-wise testing totally compelling, and remain persuaded that N-wise can be a valuable technique even though, like any other approach, it must be used with care and awareness of its limitations. VI.
IMPLEMENTATION DETAILS Our implementation is organized as a single SystemVerilog package, nwise_pkg, and a collection of macros for automated code generation. As indicated in the code examples of section IV, each user variable that will participate in N-wise generation must be a data member declared using a macro invocation whose arguments specify the variable's data type and name. These macros arrange for elements of a value-proxy array of int to be kept in lockstep with the values of the user's declared variables. In this way, construction of the N-wise patterns can proceed without knowledge of the names and data types of the user's variables, but the user's constraints on relationships among those variables are nevertheless honored. A. Automation Macros The macros (one of the NWISE_VAR... family) not only declare the data member as a rand variable, but also inject extensive automation code supporting the N-wise generation. Code Example VI-1 indicates the most important parts of the code generated by a sample invocation of macro NWISE_VAR_INT.

```
`NWISE_VAR_INT( shortint, MyVar, {5, [10:19]} )
```

expands to:

```
class __NWISE_VAR_MyVar_PROXY extends nwise_pkg::IntSetVarUtils#(shortint);
  randc shortint x;
  constraint c { x inside {5, [10:19]}; }
endclass

rand shortint MyVar;
const int __value_proxy_MyVar_idx =
  __value_proxy_next_idx(__NWISE_VAR_MyVar_PROXY::create("MyVar"));
constraint __c__value_proxy__MyVar__ {
  MyVar inside {5, [10:19]};
  value_proxy[__value_proxy_MyVar_idx] == MyVar;
}
```

Code Example VI-1: Expansion of NWISE_VAR_INT macro

Code Example VI-1 shows that the macro expands into two major sections. The first is the declaration of a utilities class derived from IntSetVarUtils. The second section is a declaration of the named member as a rand variable, with its chosen data type and with a constraint limiting it to have one of the specified set of values.
This section also contains an initialized declaration of a const variable that is set up to contain the index number (in array value_proxy) of an array element that will represent the variable within the N-wise generation code. The create method that is called by this initialization serves many purposes. It constructs an object of the utilities class, establishes its index in the proxy array, and passes on the variable's name as a string so that it can be reported properly in debug messages. The utility class contains a constraint limiting the value of its randc variable to the same set of values as specified for the variable of interest. This constraint is used to construct a list of the complete set of possible values. B. Avoidance of object bloat The N-wise package macros inject a significant amount of code into the user's class. However, great care has been taken to minimize their burden on the user class's memory footprint. The N-wise generator object, which contains several potentially large arrays, is constructed lazily when the pattern generation functions are invoked, and therefore imposes no penalty if it is unused. It is therefore completely reasonable to use an N-wise enabled class as an ordinary transaction object in UVM or similar environments. VII. PERFORMANCE N-wise generation is a nontrivial mathematical problem, but many solutions have been described in the literature surveyed at reference [2]. Our development was not primarily focused on computing the N-wise pattern set, which has been widely covered elsewhere. Rather, we gave priority to integrating the generation smoothly into a typical SystemVerilog verification flow, including that of the Universal Verification Methodology (UVM) [1]. At first, though, we chose to create a tabula rasa generator implementation, to help reinforce our own learning experience. 
Our "home-grown" generation algorithm is functionally correct (it leaves no desired combinations uncovered), and on realistic problems it typically yields test pattern sets that are only 10% to 20% larger than those from the best available tools. Its runtime is pleasingly fast on small problems (about one second to generate pairwise patterns for 10 parameters, each of cardinality 10), but scales rather poorly with increasing problem size: for pairwise generation on a problem with N parameters, each of cardinality C, its runtime scales as \( N^4 C^4 \).

We also attempted to implement the PICT heuristic described in Czerwonka's excellent account of pairwise testing in software practice [4]. The freely available PICT tool [6], which we presume uses that heuristic, has very impressive runtime performance that scales well with problem size. However, our re-implementation of the algorithm described in [4] gave disappointing run times, at least a factor of 3 slower than our own algorithm. We do not yet have an explanation for this observation, and plan to continue working on it until we understand it fully.

VIII. CONCLUSIONS

N-wise test patterns are a valuable tool in design verification, especially for covering a configuration space. The implementation presented here is particularly flexible thanks to its use of the SystemVerilog constraint language, and it has a low barrier to adoption because the addition of N-wise generation facilities to a user class imposes only minimal restrictions on the other contents and usage of that class. We believe that our approach integrates well with existing methodologies and flows, and therefore offers useful benefits to digital design verification engineers compared with other available implementations that do not fit naturally into a typical verification flow. Our implementation remains preliminary, with several interesting enhancements in preparation, and work continuing to improve its performance and capacity.
Full source code and details of the implementation, including documentation of its API, can be found at [7]. It is made available under the liberal Apache open source license [8].

IX. BIBLIOGRAPHY
A Portable High-Productivity Approach to Program Heterogeneous Systems

Zeki Bozkus, Dept. of Computer Engineering, Kadir Has Üniversitesi, Istanbul, Turkey, zeki.bozkus@khas.edu.tr
Basilio B. Fraguela, Depto. de Electrónica e Sistemas, Universidade da Coruña, A Coruña, Spain, basilio.fraguela@udc.es

Abstract—The exploitation of heterogeneous resources is becoming increasingly important for general purpose computing. Unfortunately, heterogeneous systems require much more effort to be programmed than the traditional single or even multi-core computers most programmers are familiar with. Not only new concepts, but also new tools with different restrictions must be learned and applied. Additionally, many of these approaches are specific to one vendor or device, resulting in little portability or rapid obsolescence for the applications built on them. Open standards for programming heterogeneous systems such as OpenCL help to improve the situation, but the requirement of portability has led to a programming interface more complex than that of other approaches. In this paper we present a novel library-based approach to programming heterogeneous systems that couples portability with ease of use. Our evaluations indicate that while the performance of our library, called Heterogeneous Programming Library (HPL), is on par with that of OpenCL, the current standard for portable heterogeneous computing, the programming effort required by HPL is 3 to 10 times smaller than that of OpenCL, based on the authors' implementation of five benchmarks.

Keywords—programmability; heterogeneity; portability; libraries

I. INTRODUCTION

The relevance of the usage of computing devices with very different characteristics that cooperate in a computation has increased exponentially in the past few years.
The reason for this has been the appearance of accelerators that can be programmed to perform general-purpose computations and can achieve large speedups over traditional CPUs and even multicore CPUs. These accelerators, such as GPUs [1] or the Synergistic Processing Elements (SPEs) of the Cell BE [2], always coexist with one or more general-purpose CPUs, giving rise to heterogeneous computing systems. Unfortunately this hardware heterogeneity is also reflected in the software required to program these systems since, unlike in the case of regular CPUs, with these types of accelerators programmers are typically exposed to a number of characteristics and limitations that must be handled. For this reason, they cannot be programmed using the standard languages used for general-purpose CPUs, but rather require extended versions [3][4][5][6] which demand different semantics and the specification of many details such as buffers, transfers and synchronizations. Moreover, most of these tools are specific to one vendor or even to one family of devices, which severely restricts the portability of the codes and calls into question the effort invested in their development. Libraries that complement some of these languages in order to improve programmability have been developed [7][8][9], but either their scope of application is restricted or their interface to program arbitrary computations in the GPU is inconvenient and requires the use of GPU-specific languages. Lastly, proposals have been put forward based on compiler directives [10][11][12], which obviously require special compiler support and whose performance is highly dependent on compiler technology. Additionally, all of the alternatives that we are aware of are either restricted to a single vendor or have not yet been implemented.
In this paper we present a portable library-based approach to the usage of heterogeneous systems that focuses on delivering high programmer productivity while allowing low level control. Our library, called Heterogeneous Programming Library (HPL) and developed in C++, is built on two key concepts. First, it allows the definition of functions that are evaluated in parallel by multiple threads on different inputs on regular CPUs, hardware accelerators, or both. We call these functions kernels, as they are analogous to those found in [4][5][6]. The second concept is a set of data types that can represent both scalars and arrays of any number of dimensions, and that can be used both in the serial portions of the code and in kernels.

The rest of this paper is structured as follows. The following section introduces the hardware view and the programming model provided by our library. Its programming interface is explained in Section III, followed by an illustration with examples of increasing complexity in Section IV. Then, Section V evaluates our proposal. This is followed by a discussion of related work in Section VI. The paper presents our conclusions and future work in Section VII.

II. HARDWARE AND PROGRAMMING MODEL

The main purpose of the Heterogeneous Programming Library (HPL) is to improve the productivity of programmers who want to exploit the computational power of heterogeneous devices without compromising the portability of their applications. To this end, HPL provides a programming model which enables the rapid development of parallel applications, which is suitable for any computational device, from sequential processors to many-core systems, and which is focused first on the application parallelism, and only later on its mapping onto a specific platform. For these reasons, the HPL programming model is quite simple and intuitive, and it does not result in complicated patterns of parallelism and interdependencies between tasks.
Rather, the expression of parallelism is limited to a reduced and well-structured set of constructs that are effective and platform-independent. These are in fact the characteristics that were found to be most desirable for a unified programming model for many-core systems in [13]. According to their classification, the HPL programming model belongs to the family of the application-centric generic programming models.

A programming model requires a minimal model of the underlying hardware. The abstract view of the hardware on which HPL applications run is depicted in Figure 1. There is a host with a memory and a single CPU in which the sequential portions of the application are executed. Attached to it, there are a number of computing devices, each of them with its own memory and a number of processors that can only access the memory within their device. While different devices can run different codes, all the processors in the same device must execute the same code in an SPMD fashion. In some devices the processors are subdivided into groups that share a scratchpad memory of limited size and can synchronize by means of barriers, this being the only mechanism available to synchronize processors within a device. This model is basically a simplified version of the one proposed by OpenCL [6], which also aims for maximum portability. Notice that the computational devices in this model need not be special hardware accelerators. A traditional cluster of multi-core nodes, either homogeneous or heterogeneous, could fit perfectly in this model. One possibility would be to map each node to a device whose processors are the cores within the node. Another possibility would be to conceptualize each core in the cluster as an independent device.

Given this view, an HPL application consists of traditional sequential regions, which are executed in the host, and portions of code that are run in an SPMD fashion in one or several devices.
The main program run in the host manages the transfers of data between the host and the devices and requests the execution of the parallel regions of the application in the different devices. These parallel portions of the application are expressed as functions that are evaluated in parallel by the processors in the selected device. These functions are called kernels, since they are analogous to the kernels found in [4][5][6]. Each thread that runs a copy of a kernel needs a unique identifier so that it can identify the work it is responsible for. To allow for this, kernels are executed on a domain of integers of up to three dimensions, called a global domain. Each point in this domain is assigned a unique identifier that is associated with an instance of the requested kernel, and therefore the size of this domain is the total number of parallel threads running the requested kernel. The user can optionally specify a local domain, which must have the same number of dimensions as the global domain and whose size in every dimension must be a divisor of the size of the corresponding dimension of the global domain. The threads whose identifiers belong to the same local domain can share scratchpad memory and synchronize by means of local barriers. These threads form what we call a group of threads, each group also having an n-dimensional identifier.

Figure 2 represents the unique global identifiers of the 32 threads to run for a global domain of 4 × 8 threads. The identifiers of threads that belong to the same local domain (or group) of 2 × 4 threads are surrounded by a thicker line. The unique identifier of each thread group is also indicated. As we can see, these concepts are again very similar to those in [4][6] and can be mapped to any computational device. This execution model supports in a straightforward manner the data parallel programming model. Task parallelism can be provided by requesting the parallel evaluation of different kernels on different devices.
Lastly, regarding memory, kernels can only access the processor's registers and the memory available inside the device where they run. HPL distinguishes three kinds of device memory. First, there is the standard memory, which is shared by all the processors in the device both for reading and writing. HPL calls this memory global memory because it is accessible by all the threads in the same global domain. Second, there is the scratchpad memory, which can only be accessed by the threads that belong to the same local domain. We call it the local memory for this reason. Finally, there is a memory for data that can be written by the host but which the kernels can only read, which is therefore called constant memory.

III. PROGRAMMING INTERFACE

As explained in the preceding section, our C++ HPL library allows the expression and execution of arbitrary user-defined kernels on any of the computational devices available in the system. Since these kernels must be compiled so that they can run on any device requested by the user, they cannot be expressed using the native C++ data types and control structures, as this would result in their regular compilation as standard code to be run in the host. Rather, they are written in standard C++, but using datatypes, functions and macros provided by HPL. Thanks to these tools, our library is able to build from the original C++ expressions code that can be compiled at runtime for the desired device. Our current implementation of the library generates OpenCL C [6] versions of the HPL kernels, which are then compiled to binary with the OpenCL compiler. As a result, our library can be used to perform computations on any device that is supported by OpenCL.
Since this is the open standard for the programming of heterogeneous systems, and it is already supported by a large number of heterogeneous systems including GPUs, the Cell Broadband Engine and standard multicores, this allows in turn the widespread and portable usage of HPL. This was indeed the main reason for choosing this platform as the backend for HPL, although we do not exclude using other backends for different platforms in the future. We now explain in turn the HPL datatypes, the syntax required to build its kernels, and the interface to request their execution, followed by example programs that illustrate all these points. All the HPL keywords and types are provided to the user program by the inclusion of the header file HPL.h, and they are encapsulated inside the HPL namespace in order to avoid collisions with other program objects.

A. Data Types

The HPL datatypes encapsulate and provide to the library the information it needs to manipulate the data involved in the heterogeneous computations as automatically as possible. This includes array sizes, the kind of memory where a data structure should be allocated, the availability of copies of a data structure in different devices, etc. The arrays that are used in kernels must belong to the HPL datatype Array<type, ndim [, memoryFlag]>. This is a C++ template where type is the standard C++ type for the elements of the array and ndim is the number of dimensions of the array. The third argument indicates the kind of device memory in which the array must be allocated. It is only needed when the array is to be located in constant or local memory, their respective flag values being Constant and Local. When this flag is not specified, the array is allocated in global memory (Global flag). A special situation takes place when an Array is defined within a kernel. Since a kernel is a function, all the variables defined within it are local, and therefore, private to each given evaluation of the kernel.
For this reason, all the variables defined inside a kernel that do not specify a memory flag are totally private, even if their final physical location is the global memory of the device. The constructor for an Array takes as inputs the sizes of its dimensions and, optionally, a pointer to the raw data in the host memory in case the array had been previously allocated, which also implies the user is responsible for its deallocation. Otherwise HPL takes care of the allocation and deallocation of the storage required by the array. Scalars can be defined by using the Array template with ndim=0, but HPL provides for convenience a series of types (Int, Uint, Double, ...) to define scalars of the obvious corresponding C++ type.

HPL arrays are indexed inside kernels using square brackets, just like standard C++ arrays. Nevertheless, they are indexed with parentheses in the host code. The reason for this difference is that while the code in HPL kernels is dynamically compiled, and therefore optimized, this is not the case for the host code, which is only statically compiled. As a result, accesses to variables that belong to HPL datatypes within HPL kernels have no overheads, while the accesses in host code suffer the typical overhead associated with user-defined datatypes [14]. The usage of a different kind of indexing helps the programmer to be aware of this cost and to identify more quickly whether a portion of C++ code is a parallel kernel or not. The user can avoid the indexing overhead in the host code by requesting the native pointer to the Array data in the host memory, which is provided by the method data(), and accessing the data through the pointer.

B. Kernel Syntax

As discussed at the beginning of this section, the control flow structures used inside kernels must be written using HPL keywords so that the library can capture them and thereby generate the appropriate code for them. HPL provides the usual C++ control flow constructs (if, for, ...)
with three differences. First, their names finish with an underscore (if_, for_, ...). Second, the end of each block must be closed with a corresponding statement (endif_, endfor_, ...). Lastly, the arguments to for_ are separated by commas instead of semicolons.

A second point of interest for writing HPL kernels is the existence of variables with predefined meanings. As Section II explained, kernel executions take place on a global domain of up to three dimensions on which local domains can be optionally defined. The predefined variables idx, idy and idz correspond to the value of the global domain associated with the current execution of the kernel in the first, second, and third dimension of the global domain, respectively. In this way, these variables allow for a unique identification of the current execution of the kernel. Similarly, lidx, lidy and lidz provide the local identification of the thread within its local domain in the first, second and third dimensions of the problem, respectively. For example, in Figure 2, the threads with global id (1,2), (1,6), (3,2) and (3,6) all have the local id (1,2) within their local domain. The current group of threads can be identified by means of the variables gidx, gidy and gidz, each of them providing the identification in one of the dimensions of the domain. HPL also provides similar variables for the global and the local sizes of the execution of the current kernel as well as the number of groups of threads in every dimension of the domain.

Finally, HPL provides a series of functions to perform typical computations and tasks within the kernels. A particularly important function is barrier, which performs a barrier synchronization between all the threads in a group. This function supports an optional argument to specify whether the threads need to have a consistent view of the local memory (argument LOCAL), the global memory (argument GLOBAL), or both (LOCAL|GLOBAL), after the barrier.
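The relationship between the global, local and group identifiers is plain modular arithmetic. The following hypothetical helper (not part of the HPL API; the names are ours) reproduces the layout described above for a 2 × 4 local domain:

```cpp
#include <cassert>

// Derive the local and group identifiers of a thread from its global
// identifier and the local domain sizes. With a 4x8 global domain and a
// 2x4 local domain, threads (1,2), (1,6), (3,2) and (3,6) all map to
// local id (1,2), each in a different group.
struct ThreadIds {
    int lidx, lidy;  // position within the local domain (group)
    int gidx, gidy;  // identifier of the group itself
};

ThreadIds decompose(int idx, int idy, int local_x, int local_y) {
    return { idx % local_x, idy % local_y, idx / local_x, idy / local_y };
}
```

For instance, decompose(3, 6, 2, 4) gives local id (1,2) and group id (1,1).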
Notice that the global and the local memory are different and separate, so the consistency of one of them does not imply anything about the state of the other.

C. Kernel Invocation

HPL kernels are functions that are evaluated in parallel on a domain. These functions communicate with the host by means of their arguments, which are HPL arrays or scalars. While the scalars are always passed by value, the arrays are passed by reference, and therefore are the mechanism to return the results of the computation. The syntax to request the parallel evaluation of a kernel is eval(kernelfunction)(arg1, arg2, ...). By default the kernel is evaluated in the first device found in the system that is not a standard general-purpose CPU, the global domain of the evaluation of a kernel is given by the dimensions of its first argument, and the local domain is chosen by the library. The optional methods global and local in between eval and the kernel arguments can be used to specify the global domain and the local domain of the evaluation, respectively. For example, in order to evaluate kernel f with the argument a in the default device using the global and local domain sizes illustrated in Figure 2 one would write eval(f).global(4,8).local(2,4)(a). As we explained in Section II, the main program that runs in the host is responsible for managing the data transfers and launching the kernels for execution in the different devices. For this reason, the eval function can only be used in host code.

IV. HPL EXAMPLE CODES

Three codes of increasing complexity are now used in turn to illustrate the HPL syntax described in Section III as well as the usual programming style and strategies implied by the programming model explained in Section II. The description of the codes is detailed enough to enable any C++ programmer with average proficiency and no previous experience with the programming of heterogeneous systems to begin to exploit the advantages of these systems thanks to HPL.
This is in fact one of the main purposes of our work.

A. SAXPY

Let us begin with an HPL implementation of SAXPY, which computes $S = aX + Y$, where $S$, $X$ and $Y$ are vectors and $a$ is a scalar. A complete program in HPL for this computation using double-precision floating point data is shown in Figure 3. After including the HPL.h header, we indicate in line 3 that we will operate with objects defined in the C++ HPL namespace. Then two vectors suitable for use in HPL kernels are defined in line 7. For one of them, x, the library is responsible for its allocation and deallocation. For the second one, y, an existing regular C++ vector, myvector, provides the storage. The scalar variable used in SAXPY is defined in line 14 with the suitable HPL type Double.

The HPL kernel for SAXPY is the saxpy function in lines 9 to 11. As we explained in Section III-C, HPL kernels only communicate with the host by means of their arguments. For this reason the elements that participate in the computation must be parameters of the kernel, and the return type of the function must be void. In our implementation each execution of the kernel computes a single element of the destination vector. This way, to perform SAXPY on vectors of N elements, a global domain of a single dimension and N elements must be used, so that the kernel execution with unique identification 0 <= idx < N is in charge of the computation for the idx-th elements of the vectors, as reflected in line 10. Let us remember that idx is a predefined variable that provides the value of the first dimension of the global domain associated to the current execution of the kernel. Since this problem has a single dimension, idx suffices to identify uniquely a kernel execution. Note that we use the vector Y to store the result S.
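Since Figure 3 is not reproduced in the text, it may help to see the computation the kernel performs written as an ordinary serial loop: iteration i below corresponds to the kernel instance with global id idx == i. This plain C++ sketch is an illustration of the semantics, not HPL code:

```cpp
#include <cstddef>
#include <vector>

// Serial equivalent of the SAXPY kernel: each iteration computes one
// element, and the result S is stored back into y, as in the HPL version.
void saxpy_serial(double a, const std::vector<double>& x,
                  std::vector<double>& y) {
    for (std::size_t i = 0; i < y.size(); ++i)
        y[i] = a * x[i] + y[i];
}
```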
The invocation of the kernel takes place in line 18, where neither the global nor the local domain for the execution of the kernel are provided. As we explained in the preceding section, by default the global domain is given by the number of dimensions and sizes of the first argument, which perfectly fits this example. As for the local domain, it can be chosen by the library, as this code does not use or make assumptions on it: the computation of each kernel execution is completely independent, that is, there is no cooperation between the threads that belong to the same group or local domain.

B. Dot Product

The program shown in Figure 4 is a somewhat more complex example that illustrates the usage of most HPL features introduced in the preceding sections. This program computes the dot product of two vectors of single-precision floating point elements of length N in two stages. First, an HPL kernel computes in parallel the partial dot products associated to subregions of M consecutive elements of the arrays. The result is an array of nGroup = N/M floating point values which are reduced in the host in the second stage. Notice how this array, called pSums, is indexed with square brackets in the HPL kernel in line 19, but with round parentheses in the host code in line 33. The reasons for this have been explained in Section III-A.

The vectors v1 and v2 whose dot product will be computed, as well as the intermediate vector pSums, are defined in line 25 as HPL arrays, since they will be used in the kernel. The kernel, written in function dotp, is invoked in line 30 with the syntax we have just explained. The strategy followed by our implementation is to launch N parallel kernel executions, so that the idx-th thread will be in charge of reading and multiplying the idx-th elements of the input arrays.
Then, the threads in each group of M threads, which is uniquely identified by the variable gidx, will cooperate to compute the partial dot product by means of the scratchpad memory they share. For this reason, our evaluation specifies a global domain of N elements and a local domain of M elements.

The dotp function is written with the HPL syntax. Here, the array sharedM is declared with the Local flag to place it in the scratchpad memory shared by the threads that belong to the same local domain. Its purpose is to store the results of the multiplications of the input arrays. A barrier is used in line 15 to synchronize the threads in the local domain and ensure that the writing of the sharedM array in the local memory has been completed after the barrier. After this, the first thread of each group, whose lidx is zero, performs the partial sum in the location associated with the group. We could have implemented a much more efficient reduction, using for example a binary tree of parallel reductions. However, we have followed this approach for the sake of clarity.

C. Sparse Matrix Vector Product

Sparse matrix vector multiplication (spmv) is a common primitive in many scientific applications.
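In the spirit of the sequential CSR code the paper refers to as Figure 5(a) (not reproduced in this text), a serial sparse matrix-vector product over the usual CSR arrays can be sketched as follows; the function and parameter names here are ours:

```cpp
#include <cstddef>
#include <vector>

// Serial CSR sparse matrix-vector product: out = A * vec.
// row_ptr[r]..row_ptr[r+1] delimit the nonzeros of row r; col[] and val[]
// hold their column indices and values.
void spmv_csr(const std::vector<int>& row_ptr, const std::vector<int>& col,
              const std::vector<double>& val, const std::vector<double>& vec,
              std::vector<double>& out) {
    for (std::size_t r = 0; r + 1 < row_ptr.size(); ++r) {
        double sum = 0.0;
        for (int k = row_ptr[r]; k < row_ptr[r + 1]; ++k)
            sum += val[k] * vec[col[k]];
        out[r] = sum;
    }
}
```

In the HPL version described next, the inner per-row reduction is what each group of local threads computes cooperatively on the device.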
For example, this operation is the most computationally expensive part of the Conjugate Gradient (CG) code of the NAS Parallel Benchmarks suite, and in fact it is part of the benchmarks chosen by the SHOC Benchmark [15] suite to characterize heterogeneous systems (although it does not appear in [15] it \begin{verbatim} #include "HPL.h" #define N 256 #define M 32 #define nGroup (N/M) using namespace HPL; void dotp<Array<float>,1> v1, Array<float,1> v2, Array<float,1> pSums) { Int i; Array<float, 1, Local> sharedM(M); sharedM[ldx] = v1[idx] * v2[idx]; barrier(LOCAL); if ( ldx == 0 ) { for ( i = 0, i < M, i++ ) { pSums[gidx] += sharedM[i]; } } main(int argc, char **argv) { Array<float, 1> v1(N), v2(N), pSums(nGroup); float result = 0.0; //v1 and v2 are filled in with data (not shown) eval(dotp).global(N).local(M)(v1, v2, pSums); for(int i = 0; i < nGroup; i++) result += pSums(i); std::cout << "Dot(" << result << "\n;" } Figure 4. Dotproduct example in the HPL syntax. \end{verbatim} Algorithm and Matrix transpose were taken from the AMD APP SDK. Finally, the sparse matrix vector multiplication (spmv) and reductions OpenCL benchmarks were extracted from the SHOC Benchmark suite [15]. We chose these five benchmarks because they vary largely in terms of ratio of computations to accesses to memory, access patterns, and degree of interaction required between the parallel threads that evaluate the kernels. This way they cover a wide spectrum of application behaviors and, as we will see in Section V-B, they achieve very different degrees of improvement when their parallel portions are executed on a GPU compared to their serial execution in a regular CPU. Moreover, we chose them from different sources in order to minimize the impact of coding style differences on the programmability provided by our library by comparing some benchmark examples written both with HPL and OpenCL. The second set of experiments compares the runtime performance of HPL with that of OpenCL. 
The third category evaluates the HPL performance on different platforms for portability. We wrote all the HPL versions of our codes by ourselves. This was also the case for the OpenCL version of the EP benchmark from the NAS Parallel Benchmark suite [16]. The OpenCL versions of the Floyd-Warshall algorithm and Matrix transpose were taken from the AMD APP SDK. Finally, the sparse matrix vector multiplication (spmv) and reductions OpenCL benchmarks were extracted from the SHOC Benchmark suite [15]. We chose these five benchmarks because they vary largely in terms of ratio of computations to accesses to memory, access patterns, and degree of interaction required between the parallel threads that evaluate the kernels. This way they cover a wide spectrum of application behaviors and, as we will see in Section V-B, they achieve very different degrees of improvement when their parallel portions are executed on a GPU compared to their serial execution in a regular CPU. Moreover, we chose them from different sources in order to minimize the impact of coding style differences on the programmability. The compiler used in all the tests was g++ 4.3.3 with optimization level O3. V. Evaluation This section evaluates HPL using OpenCL as comparison point for two reasons. Firstly, since it is the open standard for programming heterogeneous systems, it is the natural alternative to HPL for the portable development of applications for these systems. Secondly, since OpenCL is the backend that HPL currently uses, it is natural to wonder which is the overhead that HPL imposes with respect to the manual usage of OpenCL. Our evaluation consists of three categories. First, we measure the programmability provided by our library by comparing some benchmark examples written both with HPL and OpenCL. The second set of experiments compares the runtime performance of HPL with that of OpenCL. The third category evaluates the HPL performance on different platforms for portability. 
was added later to the suite). For these reasons we will also use this computation in our evaluation in the next section. Figure 5(a) shows the main loop of the spmv kernel for a sequential code where the sparse matrix is stored using the compressed sparse row (CSR) format. Figure 5(b) presents the corresponding HPL code for spmv. This code is a good example of heterogeneous computing using HPL. Here, the CPU builds the CSR format sequentially, a task for which it is better suited; the heavy-duty and naturally parallel part of the computation is then written with HPL so that it can be run on a device. In this code, a group of local threads identified by the predefined variable gidx is responsible for the multiplication of a row from the sparse matrix A with vector vec. Each group performs the reduction required to compute the result in the out vector for the row by summing the elements of the vector sdata in local memory.

A. Programmability

There is no straightforward or universally accepted way to determine the programmability benefits of a given programming approach. In this paper we have used Sloccount [17], which counts the number of source lines of code excluding comments and empty lines (SLOC), to measure the programmability of HPL and OpenCL.

Table 1. SLOCs for the OpenCL and HPL versions of the benchmarks and reduction in SLOCs due to the usage of HPL

<table> <thead> <tr> <th>Benchmark</th> <th>OpenCL</th> <th>HPL</th> <th>Reduction</th> </tr> </thead> <tbody> <tr> <td>EP</td> <td>1151</td> <td>281</td> <td>75.6%</td> </tr> <tr> <td>Floyd-Warshall</td> <td>1170</td> <td>107</td> <td>90.9%</td> </tr> <tr> <td>Matrix transpose</td> <td>455</td> <td>52</td> <td>88.6%</td> </tr> <tr> <td>Spmv</td> <td>1637</td> <td>517</td> <td>68.4%</td> </tr> <tr> <td>Reduction</td> <td>773</td> <td>218</td> <td>71.8%</td> </tr> </tbody> </table>

Figure 6.
Speedups of the GPU executions of the OpenCL and HPL versions of EP over the sequential execution in a CPU for different problem sizes

SLOC is a very effective software metric to estimate the amount of effort that will be required to develop a program, as well as to forecast the programming productivity or maintainability once the software is produced. The SLOC results are reported in Table 1 for the five different benchmarks written with both HPL and OpenCL. From this data, we can see that HPL outperforms OpenCL with programs that are 3 to 10 times shorter. The main reason for this result is that OpenCL requires the manual setup of the environment, the management of the buffers both in the device and the host memory and of the transfers between them, the explicit load and compilation of the kernels, etc. In HPL, on the other hand, all these necessary steps are highly automated and hidden from the user.

B. Runtime Performance

In this section we present experiments that show the performance differences between HPL and OpenCL. We used a Tesla C2050/C2070 GPU as experimental platform.
The device has 448 thread processors with a clock rate of 1.15 GHz and 6 GB of DRAM, and it is connected to a host system with four dual-core Intel Xeon processors at 2.13 GHz. Figure 6 shows the speedup of the execution on the GPU of EP both when using OpenCL and HPL with respect to the serial execution of a standard C++ version of the code in the CPU for different problem sizes. The speedup is computed taking into account the generation of the backend code (in the case of HPL) and the compilation and execution of the kernel, but not the transfers between the GPU and the main memory. The reason is that the transfer time is basically the same for OpenCL and HPL, as they both use the same OpenCL functions and runtime for this purpose. Since the main purpose of this evaluation is to analyze the performance difference between these two approaches, disregarding the transfers allows us to identify the difference between them more clearly, even if it is a bit unfair to HPL. Still, in most of our benchmarks, and particularly in EP, the transfer time is minimal compared to the computation time, which is why we have chosen this benchmark to illustrate the performance difference between both programming environments as a function of the problem size. Given the embarrassingly parallel nature of EP and its regular access patterns, the GPU always provides large speedups with respect to the CPU in Figure 6. However, what interests us most here is that the HPL performance is very similar to that of OpenCL. For the smallest problem size, W, HPL is 20.5% slower than OpenCL, but in absolute terms the execution time only goes from 0.044 to 0.053 seconds. It is important to note at this point that HPL internally stores and reuses the binaries of the kernels it generates.
This way, second and later invocations of an HPL kernel do not incur the overheads of analysis, backend code generation and compilation, and as a result they achieve runtimes virtually identical to those of OpenCL when reusing a previously compiled kernel. Kernels that require short computing times are usually written to be run in heterogeneous devices only if the program will use them several (typically, many) times. Therefore this behavior of our library dilutes the overhead of the first invocation across all the subsequent usages of the kernel. The absolute difference in runtime between OpenCL and HPL remains at similar values for larger problem sizes. This results in runtime slowdowns for HPL, that is, increases of its runtime with respect to the OpenCL version, of only 5.7%, 2.3% and 1.1% for the classes A, B and C, respectively. This happens even though the largest runs are not long either: for example, the GPU run for class C with OpenCL takes just 2.81 seconds. Figure 7 shows the speedup of all the benchmarks we implemented when they are run in the GPU using OpenCL and HPL, the baseline being a serial execution in the CPU of our system of the corresponding benchmark written and compiled with regular C++. The benchmarks and problem sizes are: EP class C, the Floyd-Warshall algorithm applied on 1024 nodes, the transposition of a 16K×16K matrix, the spmv code for a 16K×16K matrix with 1% of nonzeros, and the addition of 16M single-precision floating point values. The speedups were computed as in Figure 6 for the reasons explained above. We can see that, depending on the degree of parallelism, the regularity of the accesses and the ratio of computations to memory accesses, we have chosen benchmarks with a wide range of performances in an accelerator such as a GPU. This way the speedups found for the OpenCL codes range from 5.4 for spmv to 257 for EP.
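HPL's reuse of previously generated kernel binaries amounts to memoizing the compilation step, so only the first invocation pays for analysis and code generation. The sketch below is a generic illustration under our own naming; the paper does not describe HPL's actual internal data structures.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Generic memoization cache for compiled kernel binaries: the first lookup
// for a kernel id runs the (expensive) compile step; later lookups reuse
// the stored binary. Key and binary types are placeholders.
class KernelCache {
    std::unordered_map<std::string, std::string> binaries_;
public:
    const std::string& get(const std::string& kernelId,
                           const std::function<std::string()>& compile) {
        auto it = binaries_.find(kernelId);
        if (it == binaries_.end())                              // first call:
            it = binaries_.emplace(kernelId, compile()).first;  // compile once
        return it->second;                                      // else: reuse
    }
    std::size_t size() const { return binaries_.size(); }
};
```

With such a cache, the compilation overhead is amortized over all subsequent invocations of the kernel, which is the behavior observed for EP above.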
The most interesting fact for us is that for all of them the performance achieved by HPL is very similar to that of OpenCL. This can be seen more clearly in Figure 8, which represents the slowdown of HPL with respect to OpenCL in these executions in the GPU. We can see that the typical degradation is below 4%. This degradation is mostly due to the time required by our library to capture the computations expressed in the HPL kernels, analyze them to decide which data transfers between memories will be needed for their execution, and finally generate the corresponding OpenCL C codes. Additionally, these codes may also be slightly less efficient than OpenCL C versions written by hand in some situations. If the transfer time between CPU and GPU is taken into account in the performance comparison between HPL and OpenCL, the results are basically the same as in Figure 8 except for matrix transpose. In this benchmark these transfers consume a long time compared to the transposition itself, and since they require the same time in HPL and OpenCL, the overhead of HPL compared to OpenCL is reduced to only 0.41%, in contrast with the 3.47% shown in Figure 8.

C. Portability Results

In order to illustrate the portability of HPL across different devices, we ran our benchmarks choosing for the execution of the kernels a Quadro FX 380 (16 thread processors with a clock rate of 700 MHz and 256 MB of DRAM) that is connected to the same host. EP was not part of this set of experiments because it requires double-precision floating point calculations, which are not supported by this device. Also, due to its smaller memory we had to reduce the problem size of Floyd-Warshall to 512 nodes, and the matrix transposition was performed on matrices of 5K × 5K elements. The spmv code was run on a 8K × 8K matrix with 1% of nonzeros. Figure 9 shows the overhead of these HPL runs compared to those of the same benchmark under OpenCL in our two GPUs.
It is clear that HPL performance is again very similar to that of manually programmed OpenCL. The precision of the measurement of such a minimal performance difference is subject to the usual small variations observed in different performance measurements for the same code and inputs. This explains the small changes (≤ 2%) with respect to Figure 8 in the Tesla. The relevant conclusion is that the HPL overhead is minimal for both devices.

VI. RELATED WORK

The most widely used tools to program computing systems with accelerators are new languages which are extended versions of C (sometimes C++), with a series of related libraries and a runtime system. Brook+ [5] and the C/C++ extensions for the Cell BE [3] are good examples of this trend, although CUDA [4] has been the most successful to date. All these tools force programmers to write their applications in new languages, to deal with varying levels of low-level detail (depending on the language), and, worse, to be restricted to a single kind of accelerator or, in the best case, the devices provided by a single vendor. The more recent OpenCL [6] deserves a separate mention: although it belongs to this group, contrary to the others it is an open, royalty-free standard for general purpose parallel programming across regular CPUs and all kinds of hardware accelerators. For this reason OpenCL has been chosen as the backend for the current implementation of our library. Some of these environments come with libraries that can interoperate with them and which improve programmability for certain kinds of problems. For example, Thrust [7] facilitates the expression on CUDA of many algorithms, but it has numerous restrictions compared to our library.
It only allows the manipulation of unidimensional arrays; its computations must always be one-to-one, i.e., a single element from each input array is processed to generate a single element of one output array; it does not allow the use of local or constant memory or the specification of the number of threads to run, etc. EPGPU [9] is an interesting library focused on OpenCL with fewer limitations than Thrust. In exchange, the user-defined computations to be run in OpenCL must be written in that language inside macros that build the complete kernels. This implies that EPGPU kernels must not only be constant at compile time, but also include inside them all the definitions of the constants they use, as they are actually only strings. HPL, in contrast, captures in its kernels variables and macros that are defined outside them, which makes programming more natural and less verbose. For similar reasons, EPGPU does not analyze the kernels it generates, as that would amount to developing a compiler for the OpenCL C strings it manipulates. HPL, nevertheless, can and does analyze the kernels it builds, the aim of that analysis currently being the minimization of the data transfers due to the execution of the kernels. The different focus of EPGPU and HPL is partially illustrated by the naive matrix transpose implementations\(^1\) shown for them in Figures 10(a) and 10(b), respectively. EPGPU enormously facilitates the usage of OpenCL when its restrictions are fulfilled. In its code OpenCL is clearly visible through the usage of some of its keywords (\texttt{__global}) and the appearance of its limitations in the kernels, such as the requirement to use linear indices to access the multidimensional arrays not defined inside the kernels (see line 3). HPL, on the other hand, completely abstracts away the backend used for the kernels and avoids these restrictions, resulting in a much more natural integration in the host language.
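The linear-index requirement mentioned for EPGPU is simply the usual row-major flattening of a multidimensional index onto a flat buffer. As a reference, for an R×C matrix, element (i, j) lives at offset i·C + j (the helper name below is ours, not EPGPU API):

```cpp
// Row-major linearization of a 2D index, as needed when a multidimensional
// array must be accessed through a flat pointer inside a kernel.
inline int linearIndex(int i, int j, int cols) {
    return i * cols + j;
}
```

HPL spares the programmer this bookkeeping by supporting multidimensional indexing on its Array type directly inside kernels.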
Other related libraries are PyCUDA and PyOpenCL [8], which provide convenient interfaces to Python to perform numerous predefined computations on accelerators. They also allow the expression of custom computations on these devices, although they require strings of CUDA or OpenCL code and the computations must be element-to-element operations or reductions. Although, as of today, it only targets general-purpose multicore systems, the Intel Array Building Blocks framework [18] is similar to HPL in that it also compiles at runtime arbitrary computations that the programmer expresses in standard C++ using a series of data containers and macros it provides. Other differences with HPL are its programming model (no local domains, groups of threads, etc.) and features: for example, it does not allow the control of the task granularity, nor the specification of different kinds of memory or synchronization in the parallel codes. Finally, contrary to our application-centric approach, it is a hardware-centric programming model according to [13]. Proposals to program heterogeneous systems by means of compiler directives [10][11] that try to replicate the success of OpenMP [19] in homogeneous multicore systems have also been put forth. The limitations of compiler directives are well known. First, when the directives do not allow the programmer to specify the desired transformations with sufficient detail, the user does not have enough information about the transformations performed by the compiler. The result is the lack of a clear performance model and, therefore, of the ability to reason about the performance attained by the application [20]. Second, the compiler technology might not be developed enough to find the best low-level implementation for the application in many situations. These two problems, which were behind the lack of success of HPF [21], are particularly important for hardware accelerators, as they allow for many kinds of optimizations and are very sensitive to them.
An approach based on compiler directives that tries to avoid these problems will probably require a non-negligible number of directives, clauses and specifications in order to achieve good performance in a heterogeneous system. This is particularly true given the enormous gap between the semantics of the regular sequential code in which the directives are to be inserted and the execution models in the accelerators, as well as the large number of possible implementations of the same algorithm, and even of specifiable parameters for each implementation, in these devices. Lastly, the alternatives mentioned above only generate CUDA code, and therefore they can only target the accelerators of a single vendor. A standard interface for the parallelization on heterogeneous systems based on compiler directives has been recently proposed [12] but to date it has not been implemented.

\(^1\)These implementations do not correspond to the matrix transpose benchmark used in Section V, which optimizes the process by making contiguous reads and transposing blocks of the matrix in the local memory shared by each group of threads.

VII. CONCLUSIONS AND FUTURE WORK

In this paper we have presented the design and implementation of the Heterogeneous Programming Library (HPL), which provides a programming environment with an interface embedded inside C++ for the programming of heterogeneous platforms. HPL is designed to maximize the programmability of these systems by hiding from the user the complexities related to the usage of these platforms (buffers, transfers, synchronizations, . . . ) that are found in other approaches. Our proposal also avoids the learning curve of new languages. Rather, it uses only standard C++ features, so that programmers can continue to use the compilers and tools they are familiar with. Despite this, HPL provides an expressive and powerful syntax for writing parallel functions that take advantage of parallel architectures.
Our experiments demonstrate that HPL is a powerful alternative to OpenCL, the current standard for portable heterogeneous computing. HPL outperformed OpenCL by 3 to 10 times on programmability and productivity metrics, while typically experiencing a performance degradation well below 5%. We believe that the large programmability benefit of HPL outweighs this minor performance degradation. HPL, or future similar approaches, will increase the much needed usability of high performance heterogeneous platforms. We are working to add new features to HPL in order to further improve programmability by providing functions for typical patterns of computation. Additionally, we plan to extend the high-productivity features of HPL to handle distributed memory parallelism by running HPL on a cluster of SMP nodes in which each node can contain multiple heterogeneous computing devices.

ACKNOWLEDGMENT

This work was funded by the Xunta de Galicia under the project “Consolidación e Estructuración de Unidades de Investigación Competitivas” 2010/06 and the MICINN, cofunded by FEDER funds, under the grant with reference TIN2010-16735. Basilio B. Fraguela is a member of the HiPEAC European network of excellence and the Spanish network CAPAP-H, in whose framework this paper has been developed.

REFERENCES

**Biographies**

Zeki Bozkus received the M.S. and Ph.D. degrees in computer science from Syracuse University, NY, USA, in 1990 and 1995, respectively. He worked as a senior compiler engineer at the Portland Group, Inc. for six years, and as a senior software engineer at Mentor Graphics on the parallelization of the Calibre product line for 11 years. He has been an assistant professor at the Computer Engineering Department of Kadir Has University since 2008. His primary research interests are in the fields of parallel programming algorithms, parallel programming languages, and compilers.

Basilio B. Fraguela received the M.S. and the Ph.D.
degrees in computer science from the Universidade da Coruña, Spain, in 1994 and 1999, respectively. He has been an associate professor in the Departamento de Electrónica e Sistemas of the Universidade da Coruña since 2001. His primary research interests are in the fields of programmability, analytical modeling, design of high performance processors and memory hierarchies, and compiler transformations. His homepage is [http://gac.udc.es/~basilio](http://gac.udc.es/~basilio).
The state of the art of macroprogramming in IoT: An update

Iwens G. Sene Jr [Universidade Federal de Goiás] iwens@ufg.br
Thalia S. Santana [Universidade Federal de Goiás] thaliassantana15@gmail.com
Renato F. Bulcão-Neto [Universidade Federal de Goiás] rbulcao@ufg.br
Barry F. Porter [Lancaster University] b.f.porter@lancaster.ac.uk

Received 30 November 2021 • Accepted 7 July 2022 • Published 18 November 2022

Abstract

Macroprogramming’s primary goal is to increase developers’ productivity by providing high-level specifications of applications’ behaviour at the system level. Macroprogramming may be a viable solution for developing complex IoT applications, such as those manipulating high data volume and heterogeneity. This paper updates a recent work identifying and analysing primary research on macroprogramming in IoT through a systematic literature mapping (SLM). We extended the scope of the search strategy by conducting an automatic search over five new databases and also performing the snowballing technique. As a result, besides the group of 38 studies found in the previous SLM, nine new papers were classified as relevant and rigorously analysed, totalising forty-seven studies. In comparison to the previous work, the results still point out the recurrence of abstractions in the network infrastructure, highlighting the use of frameworks in one-third of the applications, and contribute an overview of macroprogramming by researchers in different knowledge areas.

Keywords: Internet of Things, Wireless Sensor Network, Systematic Mapping, Adaptation, Programming abstraction

1 Introduction

The literature has proposed a variety of macroprogramming approaches to mitigate the challenges of making programming more efficient for IoT and WSN applications (Madden et al., 2005; Newton et al., 2007).
In this model, high-level programs are written to represent the overall control logic of an IoT deployment, and parts of this high-level program may then be compiled into low-level code and pushed directly into IoT devices. Macroprogramming allows developers to specify a single program which defines the high-level collaboration behaviour for WSN and IoT applications at the system level, treating the entire network as if it were a “single abstract machine” (Sugihara and Gupta, 2008). Low-level network and device details, such as state maintenance or message transmission, are intentionally hidden from the programmer through an automated translation and deployment of the macroprogram into per-node logic. As a result, macroprogramming, which has already been heavily used in works related to WSN (Newton and Welsh, 2004a; Gummadi et al., 2005; Mottola and Picco, 2011), is emerging as a viable technique for the development of complex and distributed IoT applications, as demonstrated by a number of recent efforts (Noor et al., 2019; Hammoudeh et al., 2021a). Despite a large range of proposals for macroprogramming paradigms, there remains a lack of convergence on the right abstractions that users most benefit from across different application domains. In situations such as when a new device is added to the target deployment, for example, an IoT application should be capable of autonomously integrating the new device and assigning to it a task that contributes to the overall goal of the application as defined in the macroprogram (Kephart and Chess, 2003; Salehie and Tahvildari, 2009; Alajlan and Elleithy, 2014). Self-adaptation is the ability of a system to reconfigure itself automatically and dynamically in response to changes, by installing, updating and integrating existing software elements with alternative ones at run-time.
Recently, researchers have proposed various macroprogramming approaches to mitigate the challenges of making programming more efficient for IoT and WSN applications (Mizzi et al., 2019; Qiao et al., 2018). In this model, high-level programs represent the overall control logic of an IoT deployment, and excerpts of this high-level program may then be compiled into low-level code and pushed directly into IoT devices. There are three significant benefits of this approach. Firstly, it allows the programmer to work at a higher level of abstraction and encode the business logic of an entire deployment in a top-down way, which is often significantly more straightforward than writing code for individual nodes. Secondly, as selected control logic from this high-level model can be deployed into the IoT, data processing can still be performed on IoT devices, saving significantly in long-haul communication costs to the cloud. This strategy can also provide lower-latency decision and actuation control where IoT-resident control logic is positioned closer to the data sources on which decisions are being made. Finally, as high-level macro logic will often not specify the fine details of exactly where specific data should come from or where control logic should execute, this offers attractive degrees of freedom in the often dynamic deployment environments of IoT systems, so that the macroprogram translator can perform real-time tuning of which data sources are being used and work around failures and node mobility in the placement of control logic. To establish baselines for comparison with ongoing, recent research results, or even to identify suitable areas for future research, a systematic literature mapping (SLM) can be very useful (Petersen et al., 2015). An SLM identifies, selects, evaluates, interprets, and summarises relevant studies about a topic, e.g., macroprogramming in IoT/WSN.
This paper extends a previous SLM we conducted on how the macroprogramming paradigm has been investigated in IoT and WSN (Santana et al., 2021). In that previous work, the search strategy included automatic searches in three sources, namely the ACM DL, IEEE Xplore, and Scopus. In this updated SLM, we searched five new sources of studies and performed the snowballing technique to find more relevant studies. The snowballing technique (Wohlin et al., 2012) identifies relevant studies by scanning the list of bibliographic references or citations of a paper. Considering this broader search scope, this SLM classified nine new studies as relevant, for a total of 47. The contribution is a mapping of the state of the art of macroprogramming in IoT, identifying the level of the adaptations performed, with trends in abstractions applied to groups of nodes. Besides, we show how macroprogramming has been used in WSN and its increasing shift to IoT research in recent years. This paper is organised as follows: Section 2 overviews the SLM, Section 3 discusses the SLM results, and Section 4 brings our concluding remarks and future work.

2 Materials and Methods

The SLM presented in this paper is depicted in Figure 1 and includes three phases: planning, conducting, and publishing. First, a protocol is planned so that the SLM can be reproduced later. It includes research questions, search strategy, search string, sources of studies, and study selection criteria.

![Figure 1. Phases and activities of this SLM’s update.](image)

In the conducting phase, studies gathered from the search sources are initially selected by reading the studies’ metadata and applying the inclusion and exclusion criteria previously planned. Afterwards, useful information is extracted from the selected studies which, in turn, can still be excluded using the same selection criteria. As a novelty in this SLM’s update, snowballing is performed by checking the citation list of the papers resulting from the data extraction step.
This process, called forward snowballing, finishes when no new study is included. Following the SLM goal, the studies remaining from this whole process constitute the set of relevant papers from which answers to the research questions of the protocol are analysed and synthesised. Finally, the entire protocol and the results of each previous stage are documented as scientific papers or technical reports in the publishing phase.

2.1 Research questions and search terms

This SLM’s main goal is to identify primary research investigating macroprogramming with abstractions for WSN and IoT, which must also perform adaptations at the infrastructure level. The following are the research questions (RQ) we elaborated to be answered in this SLM:

- **RQ1:** What are the application domains found in primary studies?
- **RQ2:** When and where are primary studies published?
- **RQ3:** At what levels does adaptation occur, and what are the abstraction types in the infrastructure?
- **RQ4:** How are adaptations carried out in primary research on WSN and IoT?
- **RQ5:** What are the adaptability-related issues found?

The next step was to select the proper search terms to identify the most relevant primary studies to answer these research questions. Aided by experts in macroprogramming and IoT, we chose the following set of candidate search terms for the definition of the search string: macroprogramming, macro-programming, declarative approach, imperative approach, programming abstraction, high level, internet of things, cyber physical, cyber-physical, sensor networks, and wireless sensor networks.
2.2 Automatic search

After evaluating the trade-off between coverage and relevance of the search results in a pilot search, we opted for the following combination of keywords as the final search string:

(macroprogramming OR “macro-programming” OR “declarative approach” OR “imperative approach” OR “programming abstraction”) AND (“high level”) AND (“internet of things” OR “sensor networks”)

Specialists in macroprogramming, IoT, and systematic literature research contributed to the search string definition process. In our previous work, we adapted the final search string to the ACM DL, IEEE Xplore, and Scopus search engines (Santana et al., 2021). In this SLM’s update, we also performed searches on study metadata at the Engineering Village, ScienceDirect, Springer Link, Web of Science, and Wiley websites. It is worth mentioning that we chose the *ACM Guide to Computing Literature* option because it indexes both the full-text collection of ACM publications and other digital databases on Computing, which turns the ACM DL into the most comprehensive bibliographic database on Computing. More information can be found at https://libraries.acm.org/digital-library/acm-guide-to-computing-literature.

Table 1 details the number of studies retrieved from each source. Comparing the original research with this revisited work, forty-four additional studies were identified in this extended version (including duplicate documents) after adding five new sources and updating the search results in the three original sources.
Table 1. Number of studies retrieved from each source of study.

<table> <thead> <tr> <th>Source</th> <th>Original</th> <th>Extension</th> <th>Difference</th> </tr> </thead> <tbody> <tr> <td>ACM Digital Library</td> <td>16</td> <td>16</td> <td>0</td> </tr> <tr> <td>IEEE Xplore</td> <td>15</td> <td>15</td> <td>0</td> </tr> <tr> <td>Scopus</td> <td>80</td> <td>85</td> <td>5</td> </tr> <tr> <td>Engineering Village</td> <td>-</td> <td>23</td> <td>23</td> </tr> <tr> <td>ScienceDirect</td> <td>-</td> <td>1</td> <td>1</td> </tr> <tr> <td>Springer Link</td> <td>-</td> <td>5</td> <td>5</td> </tr> <tr> <td>Web of Science</td> <td>-</td> <td>10</td> <td>10</td> </tr> <tr> <td>Wiley</td> <td>-</td> <td>0</td> <td>0</td> </tr> <tr> <td>Total</td> <td>111</td> <td>155</td> <td>44</td> </tr> </tbody> </table>

2.3 Study selection and data extraction

We applied the same original selection criteria to the 155 papers returned by the automatic search process (Santana et al., 2021). The exclusion criteria (EC) are:

EC1: The paper does not describe primary research.
EC2: The document retrieved is not a paper (e.g., a preface or summary of journal or conference proceedings).
EC3: The full study text is not in English.
EC4: The full study text is not accessible.
EC5: The paper was published before 2004.
EC6: The paper does not address the IoT or WSN domains.
EC7: The paper does not propose, report, or evaluate the usage of adaptation in the context of macroprogramming for programming abstractions.
EC8: The paper is a preliminary or short version of another study.

A paper is removed from this SLM whenever it meets at least one of the exclusion criteria (EC) presented.
Otherwise, the study is categorised based on the single inclusion criterion (IC): “the study reports on the adoption of abstraction in programming and adaptation in infrastructure in IoT and WSN application domains.” As previously presented in Figure 1, study selection occurs on two occasions: after performing the search strategy (reading papers’ metadata) and during data extraction (reading papers’ full text). This strategy significantly reduces the number of non-relevant papers in the SLM. After the automatic search process, we identified and removed 66 duplicate papers (from the group of 155 studies) with the support of the Parsif.al tool (available at https://parsif.al). Next, we read the title, abstract, and keywords of each of the remaining 89 papers, applied the EC and IC, and eliminated 20 papers (see Table 3). As a result, we selected 69 “probably relevant” studies, since this selection relies only on the reading and interpretation of papers’ metadata. Next, the data extraction activity requires a form whose fields were mapped to the research questions in the planning phase. These fields are filled in during the full-text reading of each paper. Table 2 presents the mapping between form fields and research questions.

Table 2. Mapping between research questions and data extraction form fields.
<table> <thead> <tr> <th>Research question</th> <th>Data extraction form fields</th> </tr> </thead> <tbody> <tr> <td>RQ1</td> <td>Knowledge area; Application domain; Case study</td> </tr> <tr> <td>RQ2</td> <td>Publication vehicle; Publication year</td> </tr> <tr> <td>RQ3</td> <td>Adaptation level; Abstraction type</td> </tr> <tr> <td>RQ4</td> <td>Proposal; Experimental validation</td> </tr> <tr> <td>RQ5</td> <td>Limitations; Future work</td> </tr> </tbody> </table>

The data extraction activity eliminated 29 more papers, totalling 49 excluded papers. As described in Table 3, the EC7 criterion excluded the most, which means that only full-text reading allowed us to eliminate papers not focusing on adaptation and macroprogramming in IoT or WSN. As a result of the data extraction activity, 40 papers are relevant considering the SLM goal. Thus, the automatic search found only two new studies in comparison with our previous work, which had identified thirty-eight.

2.4 Snowballing

Besides automatic search, our search strategy includes forward snowballing (FSB) (Wohlin et al., 2012) as an attempt to obtain other relevant studies, using the group of forty studies as input. In this SLM, the citation list of each paper was retrieved from the Scopus search engine. In the first round of FSB, we identified 535 papers, of which 43 were duplicates of the initial group of 155 studies. The EC rejected 485 studies after metadata and full-text reading. As seven papers remained, a second round of FSB was performed. Forty-five studies cited these seven papers; however, nine of them were duplicates, and the EC rejected the remaining ones. As no new paper was identified, the snowballing procedure ended, having screened 580 studies, of which only seven (from the first round) were relevant to this SLM.
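The round-based FSB procedure just described can be sketched as a fixed-point loop. In the sketch below, `get_citing_papers` and `is_relevant` are hypothetical stand-ins for a citation-database query (the authors used Scopus) and for applying the selection criteria (EC/IC) to a paper:

```python
# Minimal sketch of forward snowballing (FSB): each round scans the
# citations of the papers newly included in the previous round, removes
# duplicates, applies the selection criteria, and stops when a round
# includes no new relevant paper.

def forward_snowballing(seed_papers, get_citing_papers, is_relevant):
    """Iterate FSB rounds until a round adds no new relevant paper."""
    relevant = set(seed_papers)
    seen = set(seed_papers)       # every paper already screened (dedup)
    frontier = set(seed_papers)   # papers whose citations we scan next
    while frontier:
        candidates = set()
        for paper in frontier:
            candidates.update(get_citing_papers(paper))
        candidates -= seen        # drop duplicates across rounds
        seen.update(candidates)
        newly_relevant = {p for p in candidates if is_relevant(p)}
        relevant.update(newly_relevant)
        frontier = newly_relevant # next round scans only the new inclusions
    return relevant
```

For example, with a toy citation graph where paper "A" is cited by "B" and "C", only "B" is relevant, and "B" is cited by a non-relevant "D", the loop performs exactly two rounds before terminating.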
Table 3. The number of studies excluded by each exclusion criterion.

<table> <thead> <tr> <th>Activity</th> <th>EC1</th> <th>EC2</th> <th>EC3</th> <th>EC4</th> <th>EC5</th> <th>EC6</th> <th>EC7</th> <th>EC8</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>Automatic search</td> <td>4</td> <td>8</td> <td>0</td> <td>0</td> <td>0</td> <td>5</td> <td>3</td> <td>0</td> <td>20</td> </tr> <tr> <td>Data extraction</td> <td>0</td> <td>6</td> <td>0</td> <td>2</td> <td>0</td> <td>0</td> <td>20</td> <td>1</td> <td>29</td> </tr> <tr> <td>Snowballing</td> <td>54</td> <td>28</td> <td>1</td> <td>2</td> <td>0</td> <td>7</td> <td>425</td> <td>4</td> <td>521</td> </tr> <tr> <td>Total</td> <td>58</td> <td>42</td> <td>1</td> <td>4</td> <td>0</td> <td>12</td> <td>448</td> <td>5</td> <td>570</td> </tr> </tbody> </table>

² Search carried out on July 14, 2020. ³ Search update carried out on July 31, 2021.

Therefore, besides the 38 studies found in the original version of this SLM, this updated version retrieved nine new relevant studies: two through the automatic search and seven through the FSB procedure. The full list of the forty-seven relevant papers is in Table 4; from now on, we identify them as S1 to S47 (S for study). By analysing the source of these relevant studies, we observed that 98% of them came from Scopus. In other words, Scopus indexes most of the publication venues whose papers investigate abstraction, macroprogramming, and adaptation in infrastructures in IoT or WSN. Further information about these studies is also available elsewhere. Finally, Figure 2 depicts the entire selection process with the respective number of primary studies selected and removed in each activity of the conducting phase. The data extracted from each relevant study is also available.

3 Results and Discussion

This section presents the analysis and synthesis of the data extracted from the 47 studies to answer the SLM’s research questions.
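Before turning to the research questions, the selection funnel described in Section 2 can be cross-checked with simple arithmetic; the numbers below are taken directly from the text:

```python
# Sanity check of the selection funnel reported in Section 2
# (automatic search plus forward snowballing).

retrieved = 155                          # automatic search, all eight sources
duplicates = 66
screened = retrieved - duplicates        # input to metadata screening
after_metadata = screened - 20           # EC/IC on title/abstract/keywords
after_fulltext = after_metadata - 29     # EC/IC on full-text reading
fsb_new = 7                              # relevant papers added by FSB
total_relevant = after_fulltext + fsb_new

assert screened == 89
assert after_metadata == 69
assert after_fulltext == 40
assert total_relevant == 47
```

The final count matches the 47 relevant papers (S1 to S47) analysed in the remainder of this section.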
3.1 About research question 1

To answer RQ1, “What are the application domains found in primary studies?”, we found that 72% of the papers (34 of 47) focused exclusively on WSN, the area with the highest number of publications. The remaining papers address IoT (7) or both IoT and WSN (6). Figure 3 presents the distribution of papers per publication year. In 2015, the first IoT-oriented papers came out, and the number of such papers has increased since then. This growth in IoT research is confirmed by the literature (Greer et al., 2019). As WSN is one of the IoT enabling technologies, this may explain the decreasing amount of macroprogramming research in WSN in favour of IoT. As depicted in Figure 4, almost 80% of the studies (37 of 47) investigated macroprogramming concepts and practices in real-world case studies. In total, those studies cover sixteen application domains, such as intelligent environments and monitoring. However, no case study was reported in the papers published in 2004 and 2015. Papers whose application domain was not explicit are represented in the Others category. The smart application domain appears to be a trend since 2018, including smart homes, buildings, grids, and transportation. Moreover, all these scenarios may converge to smart cities, representing a more complex picture for adopting macroprogramming abstractions. To summarise, the answer to RQ1 is roughly the same as in this SLM’s previous version (Santana et al., 2021): most macroprogramming studies target WSN, with an increasing shift of focus to IoT since 2015, and a diversity of application domains with an apparent inclination towards smart environments in recent years.

### Table 4. The forty-seven papers analysed in this SLM.
<table> <thead> <tr> <th>ID</th> <th>Paper</th> <th>Reference</th> </tr> </thead> <tbody> <tr> <td>S1</td> <td>A component-based approach for service distribution in Sensor Networks</td> <td>(Taherkordi et al., 2010)</td> </tr> <tr> <td>S2</td> <td>A constraint programming approach for managing end-to-end requirements in sensor network macroprogramming</td> <td>(Hassani Bijarbooneh et al., 2014)</td> </tr> <tr> <td>S3</td> <td>A library for developing real-time and embedded applications in C</td> <td>(Basanta-Val and García-Valles, 2015)</td> </tr> <tr> <td>S4</td> <td>A service-oriented approach to facilitate WSAN application development</td> <td>(Cañete et al., 2011)</td> </tr> <tr> <td>S5</td> <td>A service-oriented middleware for wireless sensor and actor networks</td> <td>(Cañete et al., 2009)</td> </tr> <tr> <td>S6</td> <td>A state-based programming model and system for wireless sensor networks</td> <td>(Bischoff and Kortuem, 2007)</td> </tr> <tr> <td>S7</td> <td>Adaptive dynamic checkpointing for safe efficient intermittent computing</td> <td>(Maeng and Lucia, 2018)</td> </tr> <tr> <td>S8</td> <td>Adaptive teams of autonomous aerial and ground Robots for situational awareness</td> <td>(Hsieh et al., 2007)</td> </tr> <tr> <td>S9</td> <td>Adaptive Wireless Networks as an Example of Declarative Fractionated Systems</td> <td>(Choi et al., 2014)</td> </tr> <tr> <td>S10</td> <td>An easy-to-use 3D visualization system for planning context-aware applications in smart buildings</td> <td>(Su and Huang, 2014)</td> </tr> <tr> <td>S11</td> <td>An overview of the VigilNet architecture</td> <td>(He et al., 2005)</td> </tr> <tr> <td>S12</td> <td>D’Artagnan: An Embedded DSL Framework for Distributed Embedded Systems</td> <td>(Mizzi et al., 2018)</td> </tr> <tr> <td>S13</td> <td>Deductive Approach to Processing High-Level Video Activity Queries in UAV Networks</td> <td>(Gupta, 2018)</td> </tr> <tr> <td>S14</td> <td>Defining Services and Service Orchestration Acting on Shared
Sensors and Actuators</td> <td>(Bouali Baghli et al., 2018)</td> </tr> <tr> <td>S15</td> <td>Design and compilation of an object-oriented macroprogramming language for wireless sensor network</td> <td>(Oppermann et al., 2014)</td> </tr> <tr> <td>S16</td> <td>Developing wireless sensor network applications based on a function block programming abstraction</td> <td>(Kerasiotis et al., 2012)</td> </tr> <tr> <td>S17</td> <td>EcoCast: Interactive, object-oriented macroprogramming for networks of ultra-compact wireless sensor nodes</td> <td>(Tu et al., 2011)</td> </tr> <tr> <td>S18</td> <td>Efficient configuration and control of SANETs using FACTS</td> <td>(Terfloth and Schiller, 2008)</td> </tr> <tr> <td>S19</td> <td>Efficient routing from multiple sources to multiple sinks in wireless sensor networks</td> <td>(Ciciriello et al., 2007)</td> </tr> <tr> <td>S20</td> <td>Energy-efficient task mapping for data-driven sensor network macroprogramming</td> <td>(Pathak and Prasanna, 2010)</td> </tr> <tr> <td>S21</td> <td>Logical neighborhoods: A programming abstraction for wireless sensor networks</td> <td>(Mottola and Picco, 2006)</td> </tr> <tr> <td>S22</td> <td>Macro programming a spatial computer with bayesian networks</td> <td>(Mamei, 2011)</td> </tr> <tr> <td>S23</td> <td>MBMF: A framework for macroprogramming data-centric sensor network applications using the Bird-Meertens formalism</td> <td>(Loke and Nadarajah, 2009)</td> </tr> <tr> <td>S24</td> <td>Nano-CF: A coordination framework for macro-programming in Wireless Sensor Networks</td> <td>(Gupta et al., 2011)</td> </tr> <tr> <td>S25</td> <td>PICO-MP: Decentralised macro-programming for wireless sensor and actuator networks</td> <td>(Dulay et al., 2018)</td> </tr> <tr> <td>S26</td> <td>A platform independent communications middleware for heterogeneous devices in smart grids</td> <td>(Chen et al., 2019)</td> </tr> <tr> <td>S27</td> <td>ProFuN TG: A tool for programming and managing performance-aware sensor network
application</td> <td>(Elsts et al., 2015)</td> </tr> <tr> <td>S28</td> <td>Programming iMote networks made easy</td> <td>(Bauderon et al., 2010)</td> </tr> <tr> <td>S29</td> <td>Intelligent IoT Systems with a Python-based Declarative Tool</td> <td>(D’Urso et al., 2019)</td> </tr> <tr> <td>S30</td> <td>Programming the smart home</td> <td>(Bischoff et al., 2007)</td> </tr> <tr> <td>S31</td> <td>PS-QUASAR: A publish/subscribe QoS aware middleware for Wireless Sensor and Actor Networks</td> <td>(Chen et al., 2013)</td> </tr> <tr> <td>S32</td> <td>Region streams: Functional macroprogramming for sensor networks</td> <td>(Newton and Walsh, 2004b)</td> </tr> <tr> <td>S33</td> <td>The omni macroprogramming environment for sensor networks</td> <td>(Awan et al., 2006)</td> </tr> <tr> <td>S34</td> <td>TinyReef: A register-based virtual machine for wireless sensor networks</td> <td>(Marques et al., 2009)</td> </tr> <tr> <td>S35</td> <td>Transactuations: Where transactions meet the physical world</td> <td>(Sengupta et al., 2019)</td> </tr> <tr> <td>S36</td> <td>UBIQUEST, For Rapid Prototyping of Networking Applications</td> <td>(Ahmad-Kassem et al., 2012)</td> </tr> <tr> <td>S37</td> <td>USEME: A service-oriented framework for wireless sensor and actor networks</td> <td>(Cañete et al., 2008)</td> </tr> <tr> <td>S38</td> <td>μSETL: A set based programming abstraction for wireless sensor networks</td> <td>(Hossain et al., 2011)</td> </tr> <tr> <td>S39</td> <td>A Service-Oriented Approach for Sensing in the Internet of Things: Intelligent Transportation Systems and Privacy Use Cases</td> <td>(Hammoud et al., 2021b)</td> </tr> <tr> <td>S40</td> <td>ACAIOT: A Framework for Adaptable Context-Aware IoT applications</td> <td>(ElKady et al., 2020a)</td> </tr> <tr> <td>S41</td> <td>A modular and extensible macroprogramming compiler</td> <td>(Hnat et al., 2010)</td> </tr> <tr> <td>S42</td> <td>A Resource-Oriented Programming Framework Supporting Runtime Propagation of RESTful Resources</td>
<td>(Qiu et al., 2014)</td> </tr> <tr> <td>S43</td> <td>Enabling Scope-Based Interactions in Sensor Network Macroprogramming</td> <td>(Mottola et al., 2007)</td> </tr> <tr> <td>S44</td> <td>Hybrid Macroprogramming Wireless Networks of Embedded Systems with Declarative Naming</td> <td>(Intanagonwiwat, 2012)</td> </tr> <tr> <td>S45</td> <td>makeSense: Simplifying the Integration of Wireless Sensor Networks into Business Processes</td> <td>(Mottola et al., 2019)</td> </tr> <tr> <td>S46</td> <td>Role-based automatic programming framework for interworking a drone and wireless sensor networks</td> <td>(Min et al., 2018)</td> </tr> <tr> <td>S47</td> <td>snBench: Programming and virtualization framework for distributed multitasking sensor networks</td> <td>(Ocean et al., 2006)</td> </tr> </tbody> </table>

### 3.2 About research question 2

To answer RQ2, "When and where are primary studies published?", we observed that 25% of the studies (12 of 47) about macroprogramming in IoT/WSN were published from 2018 onwards. According to our protocol, no paper on the subject was published in 2016 and 2017. Concerning publication venues, conferences and workshops account for 72% (34 of 47) of the accepted papers (see Figure 5). Besides, of 45 distinct publication venues, only two published two papers each: the ACM/IEEE International Conference on Information Processing in Sensor Networks and the IEEE Conference on Local Computer Networks. In brief, the answer to RQ2 is similar to the one described in Santana et al. (2021): a significant share of studies (25%) about macroprogramming in IoT/WSN appeared during the last four years, spread over a heterogeneous collection of publication venues. These results suggest an increasing interest in research on macroprogramming for IoT/WSN in recent years; besides, the community has a wide range of options to publish research on the subject.
### 3.3 About research question 3

RQ3 investigates "At what levels does adaptation occur, and what are the abstraction types in the infrastructure?". Concerning adaptation level, we followed Krupitzer's taxonomy, which describes five levels of adaptation, as depicted in Figure 6: application (individual or a set of applications), software systems (middleware or operating system), communication (network infrastructure or communication patterns), context, and technical resource (Krupitzer et al., 2015).

The most investigated adaptation levels are, in this sequence, communication in the network infrastructure (25.5%), context (21.3%), and (ensemble of) applications (19.1%), represented by 12, 11, and 9 of the 47 accepted papers (see Figure 7). Considering the adaptation level and the knowledge area of each study in Figure 7, a deeper analysis reveals that, among WSN-oriented papers, communication in the network infrastructure is the most studied level (11), followed by context (8) and communication pattern (6). In IoT-oriented papers, adaptation is more frequent at the middleware (3) and the application (2) levels, which may be explained by the usefulness of middleware in scenarios with resource-constrained IoT devices. Besides, no study examines adaptation at the level of a single application.

Regarding abstraction type, we used Mottola's work, which classifies abstractions into nodes, groups, and systems (Mottola, 2008). At the node level, macroprogramming abstractions alter individual nodes' states.

**Figure 4.** Application domain per publication year.
**Figure 5.** Publication venue per publication year.
**Figure 6.** A taxonomy for adaptation level (Krupitzer et al., 2015).
**Figure 7.** Adaptation levels and knowledge area.
At the group level, such modifications occur in a group of nodes. Finally, at the system level, macroprogramming instructions spread over the entire network. As shown in Figure 8, the group abstraction type is present in almost half of the studies (23 of 47); we suppose the flexibility of subdividing a sensor network into smaller groups with common characteristics may explain this high percentage. Next, we crossed the adaptation levels and abstraction types of the 47 accepted papers. The results reveal that context and communication in the network infrastructure are most present at the system and group levels, respectively. On the other hand, there is a more balanced distribution between adaptation levels at the node abstraction level.

![Figure 8. Adaptation level in relation to abstraction classification.](image)

The answer to RQ3 somewhat differs from the one presented in Santana et al. (2021). Communication in the network infrastructure remains the most investigated adaptation level; the same applies to the studies examining groups of nodes as the abstraction type. However, this SLM's update shows that the number of studies using nodes as the abstraction now exceeds the number of studies using the system abstraction.

### 3.4 About research question 4

To identify "How are adaptations carried out in primary research on WSN and IoT?", we found twelve different ways of implementing adaptation using macroprogramming in IoT/WSN, as depicted in Figure 9. Software frameworks are present in more than one-third of the studies (17 of 47). Frameworks hide low-level details from designers and programmers, automate part of their tasks, and ease software development; we believe these characteristics explain the high number of studies implementing adaptation demands in a software framework. Other implementations of the adaptation requirement include programming languages, middleware, and systems (six studies each).
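Counts like these, and the cross-tabulations discussed in this section, can be reproduced from the extraction form with a simple tally. The records below are illustrative toy data, not the actual extraction results, and the field names are ours:

```python
# Illustrative tally over hypothetical extraction-form records, showing
# how per-proposal counts (e.g. "frameworks: 17 of 47") and the
# adaptation-level x abstraction-type cross-tabulation can be computed.
from collections import Counter

records = [
    {"proposal": "framework", "level": "communication", "abstraction": "group"},
    {"proposal": "middleware", "level": "context", "abstraction": "system"},
    {"proposal": "framework", "level": "context", "abstraction": "group"},
]

by_proposal = Counter(r["proposal"] for r in records)
crosstab = Counter((r["level"], r["abstraction"]) for r in records)

print(by_proposal["framework"])         # 2 in this toy sample
print(crosstab[("context", "system")])  # 1 in this toy sample
```

The same `crosstab` structure, applied to the real extraction data, would yield the bubble sizes of Figure 11.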
We also classified the studies from the validation point of view: implementation, prototype, simulation, and testbed. Approximately two-thirds of the studies (30 of 47) validated their research proposals through implementation. Simulations were performed in ten studies, and the testbed was the least frequent validation type (only 2 of 47). As shown in Figure 9, implementation was also the most used validation type among the four most employed adaptation proposals (i.e., framework, programming language, middleware, and system). Thus, a software framework is the most frequent adaptation implementation, as also described in Santana et al. (2021). However, this SLM's update describes more studies employing programming languages, middleware, and systems as adaptation implementations.

### 3.5 About research question 5

To answer "What are the adaptability-related issues found?", we identified problems, limitations, and future work proposals in each accepted paper. This SLM revealed 25 distinct issues, including communication, network topology, network traffic, context-awareness, and coordination. Communication was the most cited issue (4 of 47), even in IoT-oriented studies. Observing the knowledge area (Figure 10), communication limitations were present in two IoT-oriented papers, representing the majority in that area; in WSN-oriented papers, however, the issues concentrated on middleware and development (two papers each). One of the key results of our study is that very few research papers examine the opportunities for adaptation of a deployment guided by a high-level macroprogram.
This is a key opportunity that we seek to exploit in our future research: building on the challenges in RQ5, we aim to develop formal approaches that continually guide a deployed system towards a more suitable form according to its current deployment environment conditions, while using a high-level macroprogram to ensure that the deployed system remains within an envelope of behaviour expected by the system designer.

Regarding future work proposals, 37 studies point in different directions, and the others converge on a few themes that show the target problems researchers aim to solve using macroprogramming. This provides insight into the challenges researchers view as particularly suited to a macroprogramming-based solution. Overall, the dominant target problems across the study period are energy efficiency, aiming to extend the lifetime of deployed infrastructures, and scalability, given the large sizes typical of most deployments. We also note that scalability became the dominant target problem in the three latest years of the period covered by our study. Besides energy efficiency and scalability, other target problems that have received significant interest include device location, collaboration, fault resilience, and time synchronisation. Finally, the answer to RQ5 showed a large number of different types of limitations, as well as trends for future work, and it was not possible to identify any particular dominant trend. Similar to previous work (Santana et al., 2021), communication had the most significant number of limitations, with 4 studies, most of them in IoT.

### 3.6 Results synthesis

Figure 11 illustrates a bubble chart synthesising the most relevant information we extracted and analysed from the accepted papers in this SLM. Three axes of information compose that bubble chart: adaptation level, abstraction type, and application area. The bubble size represents the number of studies that investigate the intersection of each pair of axes.
Notice the high concentration of macroprogramming research involving the communication adaptation level in WSN-oriented work. Also notable is the number of studies in which modifications caused by macroprogramming abstractions disseminate through a group of nodes. Finally, there was no study in which adaptation takes place in a single application, which confirms that macroprogramming should not be tackled at the level of isolated IoT/WSN components.

4 Conclusions and future work

Overall, we posit that macroprogramming remains a topic of significant interest and a natural approach for IoT systems. Because these systems are often composed of a large number of devices controlled by a single organisation, and because these devices are typically heterogeneous and relatively difficult to program individually, high-level abstractions to program the entire system are highly desirable. We draw on the main results of our study to present a discussion of the challenges and opportunities for future research on macroprogramming for WSN and IoT.

Converging on the right paradigms: our study revealed various macroprogramming paradigms for different applications and problems. For example, many existing programming abstractions for WSN and IoT provide a specification of actions performed by individual devices, or instead allow one to program the network and customise the underlying run-time, which is often dynamic. However, we did not observe any notable convergence on accepted macroprogramming paradigms, in general or for specific applications and challenges. Among the notable exceptions observed in the papers, ACAIOT (a framework for adaptable context-aware IoT applications) was proposed and evaluated by comparing its architecture with recent research studies and by implementing smart home application services using a real dataset (ElKady et al., 2020b).
Embracing the dynamic nature of the environment: the devices and services of an IoT deployment can change frequently and vary their availability at any given time. This can make it challenging for developers to define applications that seamlessly persist across this volatility to offer a continuous level of service. Macroprogramming appears to provide a straightforward solution to this problem, in that the overall business or scientific logic of an application can be defined separately from specific devices, with the deployment of a macroprogram able to adjust autonomously to the currently available resources.

Variable distribution of logic: as IoT deployments envision each device becoming a uniquely addressable Internet endpoint, and with the prevalence of cheap cloud computing, there is an inclination to use IoT nodes as non-intelligent data or actuation endpoints, with all business logic placed on cloud services that collect data from all nodes and make decisions based on that data. However, this architecture requires significant network capacity to get all data into the cloud and denies opportunities to perform at least some processing within the network. Macroprogramming offers a potential chance to automate the distribution of logic both within cloud services and within the IoT network itself, with automated macroprogram deployment tool chains able to decide which logic is best suited for which location based on available processing, network, and energy capabilities. One of the challenges relates to the degrees of freedom offered by macroprograms: as the system description is inherently high-level, the operationalisation of macroprograms has significant freedom in how they are deployed over time, including the placement of logic and the adaptation to fluctuations in the environment and resources. Let us take this opportunity to its extreme.
We could envision a macroprogram acting as a specification of what the system is designed to do in an ideal scenario, together with an envelope of acceptable ways to implement that functionality. A smart deployment manager could then take that idealised specification and continuously map it onto the available resources in an intelligent way, reporting how close the actual deployment is to the idealised specification of the macroprogram.

This work contributes to the IoT and WSN fields with the results of a systematic literature mapping (SLM). This SLM surveys important work on programming devices at a high level, an approach that becomes increasingly relevant with the growth of networks in data volume (large numbers of sensors) and device heterogeneity. An SLM aims to categorise the main findings of primary research about a topic and to help researchers establish baselines for other research activities. An SLM is a broader form of what the literature calls a systematic literature review (SLR), i.e., a deeper analysis and comparison of a collection of studies. As such, future work may consist of conducting an SLR on macroprogramming in IoT, focusing on those papers exploring multiple adaptation levels in groups of network nodes (see Figure 11). It is common practice to perform an SLR on pieces of evidence found in an SLM; the results of an SLR can be used to understand the efficacy and efficiency of a method or technology, or the strengths and weaknesses of methods and technologies under certain circumstances. As study quality assessment is a widely deployed technique in SLMs and SLRs, we can also elaborate a set of quality criteria to evaluate the contributions of each of the 47 selected papers.

**Figure 11.** A bubble chart describing the mapping among adaptation level, abstraction type, and application area.

Acknowledgements

This work was partly funded by the Royal Society – Newton Mobility Grant NMG-R2-170105.
This study was financed in part by CAPES - Brazil. This research is also part of the INCT of the Future Internet for Smart Cities, funded by CNPq proc. 465446/2014-0, CAPES proc. 88887.136422/2017-00, and FAPESP proc. 14/50937-1 and 15/24485-9.

The state of the art of macroprogramming in IoT: An update (Sene Jr et al., 2022)

References

Mottola, L. (2008). Programming wireless sensor networks: from physical to logical neighborhoods. Available at: https://mottola.faculty.polimi.it/theses/mottola08programming.pdf.
null], [25582, 27220, null], [27220, 30231, null], [30231, 32647, null]], "google_gemma-3-12b-it_is_public_document": [[0, 972, true], [972, 1112, null], [1112, 3309, null], [3309, 5373, null], [5373, 7986, null], [7986, 11131, null], [11131, 13819, null], [13819, 15255, null], [15255, 17057, null], [17057, 18966, null], [18966, 20982, null], [20982, 23314, null], [23314, 25582, null], [25582, 27220, null], [27220, 30231, null], [30231, 32647, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32647, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32647, null]], "pdf_page_numbers": [[0, 972, 1], [972, 1112, 2], [1112, 3309, 3], [3309, 5373, 4], [5373, 7986, 5], [7986, 11131, 6], [11131, 13819, 7], [13819, 15255, 8], [15255, 17057, 9], [17057, 18966, 10], [18966, 20982, 11], [20982, 23314, 12], [23314, 25582, 13], [25582, 27220, 14], [27220, 30231, 15], [30231, 32647, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32647, 0.11053]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
7454582c488b65e81954d189d8755b7add5083cf
MORBIG: A Static Parser for POSIX Shell

Yann Régis-Gianas, IRIF, Université Paris-Diderot, CNRS, INRIA PI.R2, Paris, France
Nicolas Jeannerod, IRIF, Université Paris-Diderot, CNRS, École normale supérieure, Paris, France
Ralf Treinen, IRIF, Université Paris-Diderot, CNRS, Paris, France

Abstract

The POSIX shell language defies conventional wisdom of compiler construction on several levels: The shell language was not designed for static parsing, but with an intertwining of syntactic analysis and execution by expansion in mind. Token recognition cannot be specified by regular expressions, lexical analysis depends on the parsing context and the evaluation context, and the shell grammar given in the specification is ambiguous. Besides, the unorthodox design choices of the shell language fit badly in the usual specification languages used to describe other programming languages. This makes the standard usage of Lex and Yacc as a pipeline inadequate for the implementation of a parser for POSIX shell. The existing implementations of shell parsers are complex and use low-level character-level parsing code which is difficult to relate to the POSIX specification. We find it hard to trust such parsers, especially when using them for writing automatic verification tools for shell scripts. This paper offers an overview of the technical difficulties related to the syntactic analysis of the POSIX shell language. It also describes how we have resolved these difficulties using advanced parsing techniques (namely speculative parsing, parser state introspection, context-dependent lexical analysis and longest-prefix parsing) while keeping the implementation at a sufficiently high level of abstraction so that experts can check that the POSIX standard is respected. The resulting tool, called MORBIG, is an open-source static parser for a well-defined and realistic subset of the POSIX shell language.
CCS Concepts: • Software and its engineering → Parsers

Keywords: Parsing, POSIX shell, functional programming

1 Introduction

Scripts are everywhere on UNIX machines, and many of them are written in POSIX shell. The POSIX shell is a central piece in the toolbox of a system administrator, who may use it to write scripts that perform all kinds of repetitive administration tasks. Furthermore, scripts are used in a systematic way by GNU/Linux distributions for specific tasks, such as cron jobs which are regularly executed, init scripts (depending on the init system) that start or stop services, or scripts which are executed as part of the process of installing, removing or upgrading software packages. The Debian GNU/Linux distribution, for instance, contains 31,832¹ of these so-called maintainer scripts, 31,521 of which are written in POSIX shell. These scripts are often executed with root privileges, since they have to act on the global system installation, for instance when installing software packages. As a consequence, erroneous scripts can wreak havoc on a system, and there is indeed a history of disastrous shell scripts (one of the authors of this paper takes the blame for one of these). An ongoing research project² aims at using formal verification tools for analyzing shell scripts. The first step when statically analyzing shell scripts is to analyze their syntactic structure, and to produce a syntax tree. This seems at first sight an easy task: after all, the POSIX standard contains a grammar, so one might think that a parser can be thrown together in a day or so, reusing what one has learned in an introductory course on compiler construction. The reality is far from that! It starts with the fact that the POSIX shell language was never designed for being statically analyzed.
In fact, the shell analyses pieces of syntax of a script on the fly, in a process that is intertwined with an evaluation mechanism called expansion. But this is only the start: the syntax of POSIX shell is full of pitfalls, which we will explain in detail in the next section, and which make it surprisingly difficult to write a parser for POSIX shell. For this reason, existing implementations of shell interpreters contain hand-crafted syntactic analyzers that are very hard to understand. Due to the way the shell semantics is defined, they do not construct a complete syntax tree, but produce pieces of syntax on the fly. We could probably have taken one of these implementations and tweaked it into constructing a complete syntax tree. The problem is: how can we trust such a parser? The parser is an essential part of our tool chain; if the parser produces incorrect syntax trees, then all formal analysis based on it will be worthless.

The standard techniques to implement syntactic analyzers are based on code generators. Using code generators is an excellent software engineering practice which allows us to write high-level and easily maintainable code. These tools take as input high-level formal descriptions of the lexical conventions and of the grammar, and produce low-level efficient code using well-understood computational devices (typically finite-state transducers for lexical analysis, and pushdown automata for parsing). This standard approach is trustworthy because (i) the high-level descriptions of the lexical conventions and grammar are usually close to their counterparts in the specification; (ii) the code generators are based on well-known algorithms like LR parsing which have been studied for almost fifty years [16].

¹ unstable, amd64 architecture, as of 29/11/2016
² CoLiS, "Correctness of Linux Scripts", https://colis.irif.fr
The problem with this approach is that the standard Lex-Yacc pipeline is inadequate for POSIX shell, as we will argue in the next section. Despite the pitfalls of the shell language, we managed to keep an important part of generated code in our implementation, described in Section 3. To sum things up, we claim the following contributions: (i) this paper provides an overview of the difficulties related to the syntactic analysis of the POSIX shell language, as well as a list of technical requirements that are, in our opinion, needed to implement a static parser for this language; (ii) it describes a modular architecture that arguably simplifies code review, especially because it follows the POSIX specification decomposition into token recognition and syntactic analysis, and because it embeds the official BNF grammar, which makes the mapping between the specification and the implementation more explicit; (iii) it is, finally, a demonstration that an LR(1) parser equipped with a purely functional and incremental interface is a lightweight solution to realize the advanced parsing techniques required by POSIX shell parsing, namely speculative and reentrant parsing, longest-match parsing, as well as parsing-dependent "negatively specified" lexing.

2 The perils of POSIX shell

The POSIX Shell Command Language is specified by the Open Group and IEEE in the volume "Shell & Utilities" of the POSIX standard. Our implementation is based on the latest published draft of this standard [14]. This standardization effort synthesizes the common concepts and mechanisms that can be found in the most common implementations of shell interpreters like bash or dash.
Unfortunately, as said in the introduction, it is really hard to extract a high-level declarative specification out of these existing implementations, because the shell language is inherently irregular and because its unorthodox design choices fit badly in the usual specification languages used by other programming language standards. Syntactic analysis is most often decomposed into two distinct phases: (i) lexical analysis, which synthesizes a stream of lexemes from a stream of input characters by recognizing lexemes as meaningful character subsequences and by ignoring insignificant character subsequences such as layout; (ii) parsing, which synthesizes a parse tree from the stream of tokens according to some formal grammar. In this section, we describe several aspects which make the shell language hard (and actually impossible in general) to parse using the standard decomposition described above, and more generally using the standard parsing tools and techniques. These difficulties raise a challenge not only in terms of programming but also in terms of reliability.

2.1 Non-standard lexical conventions

2.1.1 Token recognition

In usual programming languages, most of the categories of tokens are specified by means of regular expressions. As explained earlier, lexer generators such as Lex conveniently turn such high-level specifications into efficient finite-state transducers, which makes the resulting implementation both reliable and efficient. The token recognition process for the shell language is described in Section 2.3 of the specification [13], unfortunately without using any regular expressions. While other languages use regular expressions with a longest-match strategy to recognize the next lexeme in the input, the specification of the shell language is formulated in a "negative way".
Indeed, token recognition is based on a state machine which explains instead how tokens must be delimited in the input, and how these delimited chunks must be classified into two categories: words and operators. The state machine which recognizes the tokens is unfortunately not a regular finite-state machine. It is almost as powerful as a pushdown automaton, since it must be able to recognize nested quotations like the ones found in the following example.

Example 2.1 (Quotations). Consider the following input:

```bash
1 BAR='foo'"ba"r
2 X=0 echo x$BAR"$(echo $(date))"
```

By the lexical conventions of most programming languages, the first line would be decomposed as five distinct tokens (namely BAR, =, 'foo', "ba" and r). On the contrary, the lexical conventions of the shell language consider the entire line BAR='foo'"ba"r as a single token, classified into the category of words. On the second line, the input is split into the tokens X=0, echo and x$BAR"$(echo $(date))". Notice that the third token contains nested quotations of the form $(..$(..)), the recognition of which is out of the scope of regular finite-state machines (without a stack).

2.1.2 Layout

The shell language also has some unconventional lexical conventions regarding the interpretation of newline characters. Usually, newline characters are simply ignored by the lexing phase, since they only serve as delimiters between tokens. In shell, however, newline characters are meaningful, and there are even four different interpretations of a newline depending on the parsing context. Therefore, most of the newline characters (but not all, as we shall see in the next example) must be transmitted to the parser. Hence, one step of token recognition may produce several tokens: the delimited token and a potential delimiter that must also be transmitted to the parser. Again, this is not common practice, since, usually, lexical scanners produce at most one token every time they are invoked.
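The single-word reading in Example 2.1 can be checked in any POSIX shell; in the following demonstration (not from the paper, with the quoting written out without spurious spaces), the three quoted fragments are glued into one assignment word:

```bash
# The three quoted fragments 'foo', "ba" and r form a single word,
# so the assignment gives BAR the concatenated value.
BAR='foo'"ba"r
echo "$BAR"   # prints: foobar
```
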
Example 2.2 (Interpretations of newline characters). The four interpretations of the newline characters occur in the following example:

```bash
1 $ for i in 0 1
2 > # Some interesting numbers
3 > do echo $i \
4 >   + $i
5 > done
```

On line 1, the newline character has a syntactic meaning because it acts as a marker for the end of the sequence over which the `for`-loop is iterating. On line 2, the newline character at the end of the comment must not be ignored but is merged with the newline character of the previous line. On line 3, the newline character is preceded by a backslash. This sequence of characters is interpreted as a line-continuation, which must be handled at the lexing level. That is, in this case the newline is actually interpreted as layout. On lines 4 and 5, each of the final newlines terminates a command. The recognition of comments in shell scripts is also non-conventional. Even though the specification rule regarding comments seems quite standard:

```
If the current character is a '#', it and all subsequent characters up to,
but excluding, the next <newline> shall be discarded as a comment.
```

the fact that `#` is not a delimiter allows a word to contain the character `#`, as in the following example.

Example 2.3.

```bash
ls foo#bar
```

In that example, `foo#bar` is recognized as a single word.

2.1.3 Delimiting subshell invocations

From the lexical point of view, a subshell invocation is simply a word. Delimiting these subshell invocations is hardly reducible to regular expression matching. Indeed, to determine the end of a subshell invocation, it is necessary to recursively call the shell command parser so that it consumes the rest of the input until a complete command is parsed.

Example 2.4 (Finding the closing parenthesis requires context).

```bash
1 $(echo ')')
```

On line 1, the first occurrence of the right parenthesis does not end the subshell invocation started by `$(` because it is written between single quotes.

2.1.4 Character escaping

String literals of most programming languages may contain escaping sequences to let the programmer use the entire character set, including string delimiters.
The backslash character typically introduces such an escaping sequence, as in \" to insert a double quote or in \\ to insert a backslash. The rule of escaping is pretty simple: if a character is preceded by a backslash, it must retain its literal meaning. In a static parser for POSIX shell, this rule is significantly more complex, because the nesting of double quotes and subshell invocations has an impact on the number of backslashes needed to escape a character, as shown by the following example.

Example 2.5 (Number of backslashes to escape).

```bash
1 echo " \" "
```

On line 1, a subshell is nested inside a double-quoted string literal: in the subshell invocation, the first occurrence of the character " is not escaped even though it is preceded by a backslash; on the contrary, the second occurrence of " is escaped because it is preceded by two backslashes. The command starting on line 2 illustrates the dependency between the number of backslashes required to escape a character and the nesting depth of subshell invocations.

2.1.5 Here-documents

Depending on the parsing context, the lexer must switch to a special mode to deal with here-documents. Here-documents are chunks of text embedded in a shell script. They are commonly used to implement some form of template-based generation of files (since they may contain variables). To use that mode, the user provides textual end-markers, and the lexer then interprets all the input up to an end-marker as a single token of the category of words. The input characters are copied verbatim into the representation of the token, with the possible exception of quotations, which may still be recognized exactly as in the normal lexing mode.

Example 2.6 (Here-documents).

```bash
1 cat > notifications << EOF
2 Hi $USER!
3 Enjoy your day!
4 EOF
5 cat << EOF1 ; cat << EOF2
6 Hi John!
7 EOF1
8 Hi Jane!
9 EOF2
```

In this example, the text on lines 2 and 3 is interpreted as a single word which is passed as input to the cat command. The first cat command of line 5 is fed with the content of line 6, while the second cat command of line 5 is fed with the content of line 8. This example with two successive here-documents illustrates the non-locality of the lexing process: the word related to the end-marker EOF1 is recognized several tokens after the introduction of EOF1. This non-locality forces some form of forward declaration of tokens, the contents of which is defined afterwards.

2.2 Parsing-dependent lexical analysis

While the recognition of tokens is independent from the parsing context, their classification into words, operators, newlines and end-of-file markers must be refined further to recognize the tokens actually used in the formal grammar specified by the standard. The declaration of these tokens is reproduced in Figure 1. While a chunk categorized as an operator is easily transformed into a more specific token like AND_IF or OR_IF, an input chunk categorized as a word can be promoted to a reserved word or to an assignment word only if the parser is expecting such a token at the current position of the input; otherwise the word is not promoted and stays a WORD. This means that the lexical analysis has to depend on the state of the parser. The following two sections describe this specific aspect of the shell syntax.
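The non-locality of here-document lexing described in Example 2.6 can be replayed directly in a POSIX shell; in this quick demonstration (not from the paper), the bodies of the two here-documents follow the command line in order, each terminated by its own marker:

```bash
# Two here-documents attached to two commands on the same line:
# the first cat reads up to EOF1, the second up to EOF2.
cat << EOF1 ; cat << EOF2
Hi John!
EOF1
Hi Jane!
EOF2
```

The first cat prints `Hi John!` and the second prints `Hi Jane!`, even though both redirections were announced before either body was read.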
2.2.1 Parsing-sensitive assignment recognition

The promotion of a word to an assignment depends both on the position of this word in the input and on the string representing that word. The string must be of the form \texttt{w=u}, where the substring \texttt{w} must be a valid name, a lexical category defined in Section 3.235 of the standard by the following sentence: "a word consisting solely of underscores, digits, and alphabetics from the portable character set", whose first character is not a digit.

Figure 1. The tokens of the shell language grammar.

Example 2.7 (Promotion of a word as an assignment). On line 1, the word CC=gcc is recognized as a word assignment of gcc to CC because CC is a valid name for a variable, and because CC=gcc is written just before the command name of the simple command make. On line 2, the word CC=cc is not promoted to a word assignment because it appears after the command name of a simple command. On line 4, since "./X" is not a valid name for a shell variable, the word "./X=1" is not promoted to a word assignment and is interpreted as the command name of a simple command.

2.2.2 Parsing-sensitive keyword recognition

A word is promoted to a reserved word if the parser state is expecting this reserved word at the current point of the input.

Example 2.8 (Promotion of a word to a reserved word). On line 1, the words for, in, do, done are recognized as reserved words. On line 2, they are not recognized as such because they appear in the position of command arguments for the command ls.

In addition to this promotion rule, some reserved words can never appear in the position of a command.

Example 2.9 (Forbidden position for specific reserved words). The word else must be recognized as a reserved word, and the parser must reject this input.

2.2.3 Richly structured semantic values

The semantic value of a word can be complex, since it can be made of subshell invocations, variables and literals. As a consequence, even though the grammar considers a word as an atomic piece of lexical information, its semantic value is represented by a dedicated concrete syntax tree.
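The listings for Examples 2.7 and 2.8 did not survive the text extraction; a minimal script consistent with the line numbers cited above (layout assumed, not necessarily the authors' original) would be:

```bash
CC=gcc make    # line 1: CC=gcc is promoted to an assignment word
make CC=cc     # line 2: CC=cc stays a plain WORD (after the command name)

./X=1          # line 4: './X' is not a valid name, so './X=1' is a command name
for i in a b; do echo $i; done   # for/in/do/done recognized as reserved words
ls for in do done                # the same words, as mere arguments of ls
```

For Example 2.9, any script placing else in command position, such as a bare `else` line, must be rejected by the parser.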
Example 2.10 (Richly structured semantic values). This script is a single word, read as an ASSIGNMENT_WORD by the grammar. The semantic value of this lexeme is a sequence of a double-quoted sequence followed by a literal. The double-quoted sequence is itself composed of a subshell invocation, represented by the concrete syntax tree of its command, followed by a variable that uses the default value bar when expanded.

2.3 Evaluation-dependent lexical analysis

The lexical analysis also depends on the evaluation of the shell script. Indeed, the alias builtin command of the POSIX shell amounts to the dynamic definition of macros which are expanded just before lexical analysis. Therefore, even the lexical analysis of a shell script cannot be done without executing it; that is, lexical analysis of unrestricted shell scripts is undecidable. Fortunately, restricting the usage of the alias command to top-level commands only (that is, outside of any control structure) and performing expansion of these aliases in a preprocessing pass of the parser allows us to implement a simple form of alias expansion without endangering decidability.

Example 2.11 (Lexical analysis is undecidable).

```bash
1 if ./foo; then
2   alias x="ls"
3 else
4   alias x=""
5 fi
6 x for i in a b; do echo $i; done
```

To decide if `for` in line 6 is a reserved word, a lexer must be able to know the success of an arbitrary program `./foo`, which is impossible to do statically. Hence, the lexer must wait for the evaluation of the first command before parsing the second one. If the shell script only uses `alias` at the top level, the parser can maintain a table for aliases and apply on-the-fly a substitution of aliases by their definitions just before the lexical analysis.
Notice that this substitution introduces a desynchronization between the positions of tokens in the lexing buffer and their actual positions in the source code: this complicates the generation of precise locations in error messages.

Another problematic feature of the shell language is eval. This builtin constructs a command by concatenating its arguments, separated by spaces, and then executes the constructed command in the shell. In other words, the construction of the command that will be executed depends on the execution of the script, and hence cannot be statically known by the parser.

2.4 Ambiguous grammar

The grammar of the shell language is given in Section 2.10 of the standard. Due to lack of space, we only reproduce a fragment of it in Figure 2. At first sight, the specification seems to be written in the input format of the Yacc parser generator. However, Yacc cannot handle this specification as-is, for two reasons: (i) the specification is annotated with nine special rules which are not directly expressible in terms of Yacc's parsing mechanisms; (ii) the grammar contains LR(1) conflicts.

2.4.1 Special rules

The nine special rules of the grammar are actually the place where the parsing-dependent lexical conventions are explained. For lack of space, we only focus on Rule 4 to give the idea. This is an excerpt from the standard describing this rule:

```
[Case statement termination]
When the TOKEN is exactly the reserved word esac, the token identifier
for esac shall result. Otherwise, the token WORD shall be returned.
```

The grammar refers to that rule in the following case:

```
pattern : WORD             /* Apply rule 4 */
        | pattern '|' WORD /* Do not apply rule 4 */
        ;
```

Roughly speaking, this annotation says that, when the parser is recognizing a pattern and the next token is the specific WORD `esac`, then the next token is actually not a WORD but the token `Esac`.
In that situation, one can imagine that an LR parser must pop its stack up to a state where it is recognizing the non-terminal `case_clause`, defined as follows:

```
case_clause : Case WORD linebreak in linebreak case_list    Esac
            | Case WORD linebreak in linebreak case_list_ns Esac
            | Case WORD linebreak in linebreak              Esac
            ;
```

to conclude the recognition of the current `case_list`.

2.4.2 LR(1) conflicts

Our LR(1) parser generator detects five shift/reduce conflicts in the Yacc grammar of the standard. All these conflicts are related to the analysis of newline characters in the bodies of case items in case analyses. Indeed, the grammar is not LR(1) with respect to the handling of these newline characters. Here is the fragment of the grammar that is responsible for these conflicts:

```
compound_list : linebreak term
              | linebreak term separator
              ;
case_list_ns  : case_list case_item_ns
              | case_item_ns
              ;
case_list     : case_list case_item
              | case_item
              ;
case_item_ns  : pattern ')' linebreak
              | pattern ')' compound_list
              ;
```

An LR parser cannot choose between reducing the term into a `compound_list` or shifting the NEWLINE to start the recognition of the final separator of the current `compound_list`. Fortunately, as the newline character has no semantic meaning in the shell language, choosing between reduction and shift has no significant impact on the output parse tree.

3 Unorthodox parsing

Our parser library is designed for a variety of applications, including statistical analysis of the concrete syntax of scripts (see, for instance, Section 4.2). Therefore, contrary to parsers typically found in compilers or interpreters, our parser does not produce an abstract syntax tree from a syntactically correct source, but a parse tree instead. A parse tree, or concrete syntax tree, is a tree whose nodes are grammar rule applications.
Because we need concrete syntax trees (and also, as we shall see, because we want high assurance about the compliance of the parser with respect to the POSIX standard), reusing an existing parser implementation was not an option, as said in the introduction. Our research project required the reimplementation of a static parser from scratch. Before entering the discussion about implementation choices, let us sum up a list of the main requirements that are implied by the technical difficulties explained in Section 2: (i) lexical analysis must be aware of the parsing context and of some contextual information like the nesting of double quotes and subshell invocations; (ii) lexical analysis must be defined in terms of token delimitations, not in terms of recognition of (regular) token languages; (iii) the syntactic analysis must be able to return the longest syntactically valid prefix of the input; (iv) the parser must be reentrant; (v) the parser must forbid certain specific applications of the grammar production rules; (vi) the parser must be able to switch between the token recognition process and the here-document scanner. In addition to these technical requirements, there is an extra methodological one: the mapping between the POSIX specification and the source code must be as direct as possible. The tight interaction between the lexer and the parser prevents us from writing our syntactic analyzer following the traditional design found in most textbooks [2], that is, a pipeline of a lexer followed by a parser. Hence, we cannot use the standard interfaces of code generated by Lex and Yacc either, because these interfaces have been designed to fit this traditional design. There exist alternative parsing technologies, e.g.
scannerless generalized LR parsers or top-down general parsing combinators, that could have offered elegant answers to many of the requirements enumerated previously; but, as we will explain in Section 7, we believe that none of them fulfills the entire list of these requirements. In this situation, one could give up using code generators and fall back to the implementation of a hand-written character-level parser. This is done in DASH for instance: the parser of DASH 0.5.7 is made of 1569 hand-crafted lines of C code. This parser is hard to understand because it is implemented by low-level mechanisms that are difficult to relate to the high-level specification of the POSIX standard: for example, lexing functions are implemented by means of gotos and complex character-level manipulations; the parsing state is encoded using activation and deactivation of bit fields in one global variable; some speculative parsing is done by allowing the parser to read the input tokens several times; etc. Other implementations, like the parser of Bash, are based on a Yacc grammar extended with some code to work around the specificities of shell parsing. We follow the same approach, except on two important points. First, we are stricter than Bash with respect to the POSIX standard: while Bash is using an entirely different grammar from the standard, we literally cut-and-paste the grammar rules of the standard into our implementation to prevent any change in the recognized language. Second, in Bash, the amount of hand-written code that accompanies the Yacc grammar is far from negligible. Indeed, we counted approximately 5000 extra lines of C to handle the shell syntactic peculiarities. In comparison, our implementation only needed approximately 1000³ lines of OCaml to deal with them. Of course, these numbers should be taken with some precaution, since OCaml has a higher abstraction level than C, and since Bash implements a significant extension of the shell language.
Nonetheless, we believe that our design choices greatly help in reducing the amount of ad hoc code accompanying the YACC grammar of the POSIX standard. The next sections try to give a glimpse of the key aspects of our parser implementation.

3.1 A modular architecture

Our main design choice is not to give up on modularity. As shown in Figure 3, the architecture of our syntactic analyzer is similar to the common architecture found in textbooks, as we clearly separate the lexing phase and the parsing phase into two distinct modules with clear interfaces. Let us now describe the original aspects of this architecture. As suggested by the illustration, we decompose lexing into two distinct subphases. The first phase, called “prelexing”, implements the “token recognition” process of the POSIX standard. As said earlier, this parsing-independent step classifies the input characters into three categories of “pretokens”: operators, words and potentially significant layout characters (newline characters and end-of-input markers). This module is implemented using OCAMLLEX, a lexer generator distributed with the OCAML language. In Section 3.2, we explain which features of this generator we use to get a high-level implementation of lexical conventions close to the informal description of the specification.

\(^3\)The total number of lines of code is 2141, including type definitions, utilities and infrastructure.

First, thanks to the organization of the lexical rules, we were able to separate the lexer into a set of entry points where each entry point refers to a specific part of the POSIX standard. This structure of the source code eases documentation and code reviewing, hence increases its reliability. Second, each entry point of the lexer can be parameterized by one or several arguments. These arguments are typically used to have the lexer track contextual information along the recognition process.
Combined with recursion, these arguments give lexers the same expressive power as deterministic pushdown automata. This extra expressive power allows our lexer to parse nested structures (e.g. parenthesized quotations) even though they are not regular languages. In addition, the parameters of the lexer entry points make it possible to factor several lexical rules out into a single entry point. Last but not least, the prelexer context is flexible enough to maintain the word-level concrete syntax trees mentioned in Section 2.2.3.

3.3 Incremental and purely functional parsing

Yacc-generated parsers usually provide an all-or-nothing interface: when they are run, they either succeed and produce a semantic value, or they fail if a syntax error is detected. Once invoked, these parsers take control and do not give it back until they have finished their computation. During its execution, a parser calls its lexer to get the next token, but the parser does not transmit any information during that call since the lexer is usually independent of parsing. As we have seen, in the case of the shell language, when the lexer needs to know whether a word must be promoted to a keyword or not, it must inspect the parser context to determine if this keyword is an acceptable token at the current position of the input. Therefore, the conventional calling protocol between lexers and parsers is not adapted to this situation. Fortunately, the Menhir [21] parser generator has recently been extended by François Pottier to produce an incremental interface instead of the conventional all-or-nothing interface. In that new setting, the caller of a parser must manually provide the input information needed by the parser for its next step of execution, and the parser gives control back to its caller after the execution of this single step. Hence, the caller can implement a specific communication protocol between the lexer and the parser.
In particular, the state of the parser can be transmitted to the lexer. This protocol between the incremental parser generated by Menhir and the parsing engine is specified by a single type definition:

```ocaml
type 'a checkpoint = private
  | InputNeeded of 'a env
  | Shifting of 'a env * 'a env * bool
  | AboutToReduce of 'a env * production
  | HandlingError of 'a env
  | Accepted of 'a
  | Rejected
```

A value of type `'a checkpoint` represents the entire immutable state of the parser generated by Menhir. The type parameter `'a` is the type of semantic values produced by a successful parsing. The type `'a env` is the internal state of the parser, which roughly speaking contains the stack and the current state of the generated LR pushdown automaton. As specified by this sum type, there are six situations where the incremental parser generated by Menhir interrupts itself to give control back to the parsing engine:

(i) InputNeeded means that the parser is waiting for the next token. By giving back the control to the parsing engine and by exposing a parsing state of type `'a env`, the lexer has the opportunity to inspect this parsing state and decide which token to transmit. This is the property we exploit to implement the parsing-dependent lexical analysis.

(ii) Shifting is returned by the generated parser just before a shift action. We do not exploit this particular checkpoint.

(iii) AboutToReduce is returned just before a reduce action. We exploit this checkpoint to implement the treatment of reserved words (see Section 3.3.1).

(iv) HandlingError is returned when a syntax error has just been detected. We do not exploit this checkpoint.

(v) Accepted is returned when a complete command has been recognized. In that case, if we are not at the end of the input file, we reiterate the parsing process on the remaining input.

(vi) Rejected is returned when a syntax error has not been recovered by any handler. In that case, the parsing process stops with an error message.
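To make this protocol concrete, here is a toy, self-contained model of an incremental parser driven one checkpoint at a time. It is a hand-written stand-in for a Menhir-generated parser (it recognizes the language aⁿbⁿ and uses only three of the six checkpoint cases): the constructor names and the `offer` function mirror the interface described in this section, but the `env` record and the parsing logic are invented for illustration.

```ocaml
(* Toy model of the incremental protocol: NOT Menhir output. *)
type token = A | B | EOF

type env = { reading_a : bool; pending : int; total : int }

type checkpoint =
  | InputNeeded of env   (* the parser is suspended, waiting for a token *)
  | Accepted of int      (* semantic value: the number of (a, b) pairs *)
  | Rejected

let initial = InputNeeded { reading_a = true; pending = 0; total = 0 }

(* offer: feed one token to a suspended parser. Checkpoints are
   immutable values, so the argument checkpoint is left intact --
   which is exactly what makes speculative parsing cheap. *)
let offer cp tok =
  match cp, tok with
  | InputNeeded e, A when e.reading_a ->
      InputNeeded { e with pending = e.pending + 1; total = e.total + 1 }
  | InputNeeded e, B when e.pending > 0 ->
      InputNeeded { reading_a = false; pending = e.pending - 1; total = e.total }
  | InputNeeded e, EOF when e.pending = 0 ->
      Accepted e.total
  | _ -> Rejected

(* The parsing engine: a plain loop threading the checkpoint. *)
let parse tokens =
  let rec loop cp toks =
    match cp, toks with
    | InputNeeded _, tok :: rest -> loop (offer cp tok) rest
    | Accepted v, _ -> Some v
    | _ -> None
  in
  loop initial tokens
```

Because a checkpoint is an immutable value, offering a token to a saved checkpoint leaves it intact, so backtracking after a failed speculative step amounts to simply reusing the old binding.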
Now that the lexer has access to the state of the parser, how can it exploit this state? Must it go into the internals of LR parsing to decipher the meaning of the stack of the pushdown automaton? Actually, a far simpler answer can be implemented most of the time: the lexer can simply perform some speculative parsing to observationally deduce information about the parsing state. In other words, to determine if a token is compatible with the current parsing state, the lexer just executes the parser with the considered token and checks whether this produces a syntax error or not. If a syntax error is raised, the lexer backtracks to the parsing state that held just before the speculative parsing execution. If the parsing engine of Menhir were imperative, the backtracking required to implement speculative parsing would necessitate some machinery to undo parsing side effects. Since the parsing engine of Menhir is purely functional, we do not need such machinery: the state of the parser is an explicit immutable value passed to the parsing engine, which returns in exchange a fresh parsing state without modifying the input state. The API to interact with the generated parser is restricted to only two functions:

```ocaml
val offer:
  'a checkpoint -> token * position * position
  -> 'a checkpoint

val resume:
  'a checkpoint -> 'a checkpoint
```

The function offer is used when the checkpoint is exactly of the form InputNeeded. In that specific case, the argument is a triple of type token * position * position passed to the generated parser. The function resume is used in the other cases to give control back to the generated parser without transmitting any new input token. From the programming point of view, backtracking is as cheap as declaring a variable to hold the state, so as to recover it if a speculative parsing goes wrong.
From the computational point of view, thanks to sharing, the overhead in terms of space is negligible, and the overhead in terms of time is reasonable since we never transmit more than one input token to the parser when we perform such speculative parsing. Another essential advantage of immutable parsing states is the fact that the parsers generated by \texttt{Menhir} are reentrant by construction. As a consequence, multiple instances of our parser can be running at the same time. This property is needed because the prelexer can trigger new instances of the parser to deal with subshell invocations. Notice that the parsing of a subshell invocation is not terminated by a standard end-of-file marker: indeed, it is usually stopped by the closing delimiter of the subshell invocation. For instance, parsing \texttt{echo \$(date)} requires a subparser to be executed after \texttt{\$(} and to stop before \texttt{)}. As it is very hard to correctly delimit subshell invocations without parsing their content, this subparser is provided with the entire input suffix and is responsible for finding the end of the subshell invocation by itself. This input suffix, taken as a whole, is not syntactically correct: when a subparser encounters the closing delimiter of the subshell invocation (the closing parenthesis in our example), it will produce a syntax error. To tackle this issue, our parser can be run in a special mode named “longest valid prefix”. In that mode, the parser returns the longest prefix of the input that is a valid complete command. This feature is similar to backtracking and is just as easy to implement thanks to immutable parsing states.

### 3.3.1 Recognizing reserved words

In this section, we describe our technique to handle the promotion of words to reserved words in a parsing-context-sensitive way, as well as the handling of promoted words which generate syntax errors.
As explained earlier, this technique intensively uses the fact that the parser generated by \texttt{Menhir} is incremental and purely functional. Let us first show the code of the function which decides whether to promote a word into a reserved word:

```ocaml
 1 let recognize_reserved_word_if_relevant =
 2   fun checkpoint pstart pstop w ->
 3   FirstSuccessMonad.(
 4     try
 5       let kw' = keyword_of_string w in
 6       let kw = (kw', pstart, pstop) in
 7       if accepted_token checkpoint kw
 8       then return kw
 9       else raise Not_found
10     with Not_found ->
11       return_if (is_name w) (Name w)
12   )
```

Line 3 declares that this function is in the \texttt{FirstSuccessMonad}, the details of which are not important here. On line 5, a lookup in a table detects if the word \( w \) is an actual keyword. If not, the exception \texttt{Not_found} is raised. Otherwise, the corresponding keyword token \texttt{kw} is passed to the function \texttt{accepted_token} to determine whether the promotion of \( w \) to \texttt{kw} introduces a syntax error. If the token is not accepted, \texttt{Not_found} is raised. The exception handler on line 11 classifies \( w \) as a name if it falls into a specific lexical category. The definition of \texttt{accepted_token} is:

```ocaml
let accepted_token checkpoint token =
  match checkpoint with
  | InputNeeded _ -> close (offer checkpoint token)
  | _ -> false
```

If the parser is in a state where an input is needed, we offer it the token.
The resulting new checkpoint is passed to the following recursive function \texttt{close} to determine if a syntax error is detected by the parser:

```ocaml
let rec close checkpoint =
  match checkpoint with
  | AboutToReduce _ -> close (resume checkpoint)
  | Rejected | HandlingError _ -> false
  | Accepted _ | InputNeeded _ | Shifting _ -> true
```

Notice that this function always terminates since the recursive call to \texttt{close} is done just before a reduction, which always consumes some entries at the top of the pushdown automaton stack. This speculative parsing solves the problem of reserved words only partially. Indeed, if a keyword is used where a \texttt{cmd_word} or a \texttt{cmd_name} is expected, that is, as the command of a \texttt{simple_command}, it must be recognized as a reserved word even though it generates a syntax error. Therefore, the function \texttt{recognize_reserved_word_if_relevant} is counterproductive in that case, because it prevents the considered word from being promoted to a reserved word and would fail to detect the expected syntax error. Thanks to the \texttt{AboutToReduce} case, we are able to detect \textit{a posteriori} that a word, which has not been promoted to a reserved word, has been used to produce a \texttt{cmd_word} or a \texttt{cmd_name}:

```ocaml
 1 | AboutToReduce (env, production)
 2     when lhs production = X (N N_cmd_word)
 3       || lhs production = X (N N_cmd_name) ->
 4   begin match top env with
 5   | Some (Element (state, v, _, _)) ->
 6     begin match incoming_symbol state, v with
 7     | T T_NAME, Name w when is_reserved_word w
 8     | T T_WORD, Word w when is_reserved_word w ->
 9       raise ParseError
10     | _ -> continue checkpoint
11     end
12   | None -> continue checkpoint
13   end
```

Let us explain this code. First, it is a pattern-matching branch for the case \texttt{AboutToReduce}. Conceptually, the argument named \texttt{env} represents the stack of the LR pushdown automaton and the argument named \texttt{production} is a descriptor for the reduction that is about to happen. On lines 2 and 3, we first check that this production is indeed a rule whose left-hand side (the produced nonterminal) is either a \texttt{cmd_name} or a \texttt{cmd_word}.
In that case, we extract the topmost element of the automaton stack: it must be a \texttt{NAME} or a \texttt{WORD} token. We just have to check that the semantic values of these tokens are not reserved words to determine whether a syntax error must be raised or whether parsing can go on.

### 3.4 From the code to the POSIX specification

What makes us believe that our approach to implementing the POSIX standard will lead to a parser that can be trusted? Actually, as the specification is informal, it is impossible to prove our code formally correct. We do not even claim the absence of bugs in our implementation: this code is far too immature for that. In our opinion, our approach to developing \textsc{Morbig} is likely to lead to a trustworthy implementation because (i) its code is written in such a way that it facilitates code review; (ii) it includes the formal shell grammar of the POSIX standard as-is; (iii) it has been tested with a rigorous method; (iv) it seems to behave like POSIX-compliant shells.

**Code review** Comments represent 40% of the \textsc{Morbig} source code. We tried to quote the POSIX specification related to each code fragment so that a code reviewer can evaluate the adequacy between the implementation and its interpretation of the specification. We also document every implementation choice we make, and we explain the programming techniques used to ease the understanding of the unorthodox parts of the program, typically the speculative parsing.

**Cut-and-paste of the official shell grammar** We commit ourselves to not modifying the official BNF of the grammar despite its incompleteness and the nine exotic side rules described earlier. In our opinion, this is a strength of our approach: this BNF is the most declarative and formal part of the specification, and knowing that our generated parser recognizes the same language as this BNF is a reason to trust our implementation.
**Testsuite** \textsc{Morbig} comes with a testsuite which follows the same structure as the specification: for every section of the POSIX standard, we have a directory containing the tests related to that section. At this time, the testsuite is relatively small, since it is only made of 149 tests. A code reviewer may still be interested in this testsuite to quickly find out whether some corner case of the specification has been tested and, if not, to contribute a test for that corner case.

**Comparison to existing shell implementations** To disambiguate several paragraphs of the standard, we have checked that the behavior of \textsc{Morbig} coincides with the behavior of shell implementations which are believed to be POSIX-compliant, typically \textsc{DASH} and \textsc{Bash} (in POSIX mode).

### 4 Applications

#### 4.1 Shell parsing toolkit

There are two interfaces to the \textsc{Morbig} parser: a Command Line Interface (CLI) and an Application Programming Interface (API). The CLI of \textsc{Morbig} is an executable program called \texttt{morbig}. It takes as input a list of filenames and, for each syntactically correct input file, it produces a JSON file containing a textual representation of the concrete syntax tree of the shell script. To use the API of \textsc{Morbig}, a programmer writes an OCAML program linked to a library called \texttt{libmorbig}. The parsing API of \textsc{Morbig} contains just one function:

```ocaml
val parse : string -> CST.complete_command list
```

The API is richer when it comes to analyzing and transforming concrete syntax trees. Indeed, in addition to the type definitions for the concrete syntax trees, the module \texttt{CST} defines several classes of \texttt{visitors}. The visitor design pattern [10] is an object-oriented programming technique to define a computation over one or several mutually recursive object hierarchies.
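To give a flavor of such analyses, here is a minimal, self-contained sketch of a hand-rolled bottom-up traversal over a toy CST. The type below is invented for illustration and has three constructors instead of Morbig's actual 108; the generated visitor classes automate exactly this kind of boilerplate recursion.

```ocaml
(* A toy CST and a bottom-up computation over it: counting the simple
   commands occurring in a script. Morbig's real CST type is far
   richer; this only sketches the shape of such an analysis. *)

type cst =
  | SimpleCommand of string * string list   (* command name, arguments *)
  | Pipeline of cst list                    (* cmd | cmd | ... *)
  | IfClause of cst * cst * cst option      (* condition, then, optional else *)

let rec count_simple_commands = function
  | SimpleCommand _ -> 1
  | Pipeline cs ->
      List.fold_left (fun acc c -> acc + count_simple_commands c) 0 cs
  | IfClause (cond, then_, else_) ->
      count_simple_commands cond
      + count_simple_commands then_
      + (match else_ with None -> 0 | Some e -> count_simple_commands e)

(* E.g. "if test -f /etc/passwd; then cat /etc/passwd | wc -l; fi": *)
let script =
  IfClause (SimpleCommand ("test", ["-f"; "/etc/passwd"]),
            Pipeline [SimpleCommand ("cat", ["/etc/passwd"]);
                      SimpleCommand ("wc", ["-l"])],
            None)
```

Here `count_simple_commands script` evaluates to 3; a `reduce`-style visitor expresses the same computation without writing the recursion by hand.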
The next section explains the advantages of defining an analysis with such visitors. In the API, six classes of visitors are provided: \texttt{iter} to traverse a CST, \texttt{map} to transform a CST into another CST, \texttt{reduce} to compute a value by a bottom-up recursive computation on a CST, as well as \texttt{iter2}, \texttt{map2} and \texttt{reduce2}, which traverse two input CSTs of similar shapes at the same time. These visitors come for free, as we use a preprocessor [20] which automatically generates visitor classes out of type definitions.

#### 4.2 An analyzer for Debian maintainer scripts

The original motivation for the \textsc{Morbig} parser comes from a research project on the development of formal methods for the verification of the so-called \texttt{maintainer scripts} present in the Debian GNU/Linux distribution. As a first step of this project, we need a statistical analysis of the corpus in order to know which elements of the shell language, and which UNIX commands with which options, are most used in our corpus. It is easy to implement such an analysis operating on the concrete syntax trees produced by \textsc{Morbig}. Individual analysers are written using the visitor design pattern [10] in order to cope with the 108 distinct cases in the type of concrete syntax trees.

### 5 Current limitations and future work

An important issue is how to validate our parser. Counting the number of scripts that are recognized as being syntactically correct is only a first step, since it does not tell us whether the syntax tree constructed by the parser is the correct one. We can imagine several ways in which the parser can be validated. One approach is to write a pretty-printer which serializes the concrete syntax tree constructed by the parser. The problem is that our parser has dropped part of the layout present in the shell script, in particular information about spaces and comments.
Still, a pretty-printer can be useful to a human when verifying the correct behavior of the parser on a particular case of doubt: this is the technique we used to build our testsuite. It might also be possible to compare the result obtained by our pretty-printer with the original script after passing both through a simple filter that removes comments and normalizes spaces. Furthermore, a pretty-printing functionality can be used for an automatic smoke test on the complete corpus: the action which consists of parsing a shell script and then pretty-printing it must be idempotent, that is, performing it twice on a shell script must yield the same result as performing it once. Another possible approach is to combine our parser with an interpreter that executes the concrete syntax tree. This way, we can compare the result of executing a script with our interpreter against the result obtained by one of the existing POSIX shell interpreters. Finally, the scripts of our corpus may not cover all the diversity of the shell scripts that can be found in the wild, since they are dedicated to a very specific task, namely package maintenance. We are currently working on a new corpus of 7.5 million shell scripts extracted from the archive of the Software Heritage project [7].

### 6 Availability and Benchmarks

Morbig is Free Software, published under the GPL3 license. It is available at https://github.com/colis-anr/morbig as an OPAM package. On an i7-4600U CPU @ 2.10GHz with 4 cores, an SSD hard drive and 8GB of RAM, it takes 41s\(^4\) to successfully parse the 31521 scripts of the corpus (which represents 99% of the 31832 files of the corpus) and to serialize the corresponding concrete syntax trees to disk. The average time to parse a script from the corpus of Debian maintainer scripts is therefore 1.3ms (with a standard deviation of less than 1% of this duration).
The maximum parsing time is 100ms, reached for the prerm script of package w3c-sgmmlib_1.3-1_all, which is 1121 lines long.

### 7 Related work

#### 7.1 About the POSIX shell language

**Analysis of package maintainer scripts** To our knowledge, the only existing attempt to analyze a complete corpus of package maintainer scripts was done in the context of the Mancoosi project [6]. An architecture for a software package installer is proposed that simulates a package installation on a model of the current system in order to detect possible failures. The authors have identified 52 templates which completely cover 64.3% of the 25,440 maintainer scripts of the Debian Lenny release. These templates are then used as building blocks of a DSL that abstracts maintainer scripts. In this work, a first set of script templates had been extracted from the relevant Debian toolset (DEBHELPER), and then extended by clustering scripts using the same statements [8]. The tool used in this work is geared towards comparing shell scripts with existing snippets of shell scripts, and is based on purely textual comparisons.

**Analysis of shell scripts** There have been few attempts to formalize the shell. Recently, Greenberg [11] has presented elements of a formal semantics of POSIX shell. The work behind Abash [17] contains a formalization of the part of the semantics concerned with variable expansion and word splitting. The Abash tool itself performs abstract interpretation to analyze the possible arguments passed by Bash scripts to UNIX commands, and thus to identify security vulnerabilities in Bash scripts. Several tools can spot certain kinds of errors in shell scripts. The checkbashisms [5] script detects usage of Bash-specific syntax in shell scripts; it is based on matching Perl regular expressions against a normalized shell script text.
This tool is currently used in Debian as part of the lintian package analysis suite. The tool shellcheck [12] detects error-prone usage of the shell language. This tool is written in Haskell with the parser combinator library Parsec. Therefore, there is no Yacc grammar in the source code to help us determine how far the language recognized by shellcheck is from the POSIX standard. Besides, the tool does not produce intermediate concrete syntax trees, which forces the analyses to be done on the fly during parsing itself. This approach lacks modularity, since the integration of any new analysis requires the modification of the parser source code. Nevertheless, as it is hand-crafted, the parser of shellcheck can keep fine control over the parsing context: this allows for the generation of very precise and helpful error messages. We plan to use the recent new ability [19] of Menhir to obtain error messages of similar quality.

#### 7.2 About parsing technologies

**General parsing frameworks** Menhir [21] is based on a conservative extension of LR(1) [16], inspired by Pager's algorithm: it produces pushdown automata almost as compact as LALR(1) automata without the risk of introducing LALR(1) conflicts. As a consequence, the resulting parsers are both efficient (word recognition has a linear complexity) and reasonable in terms of space usage. However, the set of LR(1) languages is a strict subset of the set of context-free languages. For context-free languages which are not LR(1), there exist well-known algorithms like Earley's [4, 9], GLR [24], GLL [22] or general parser combinator algorithms [15]. These algorithms can base their decisions on an arbitrary number of lookahead tokens, can cope with ambiguous grammars by generating parse forests instead of parse trees, and generally have a cubic complexity. There also exist parsing algorithms and specifications that go beyond context-free grammars, e.g. reflective grammars [23] or data-dependent grammars [1].
Since the grammar of POSIX shell is ambiguous, one may wonder why we stick to an LR(1) parser instead of choosing a more general parsing framework like the ones cited above. First, as explained in Section 2.4, the POSIX specification embeds a Yacc grammar specification which is annotated by rules that change the semantics of this specification, but only locally, by restricting the application of some of the grammar rules. Hence, if we forget the shift/reduce conflicts mentioned in Section 2.4, this leads us to think that the authors of the POSIX specification actually had a subset of an LR(1) grammar in mind. Being able to use an LR(1) parser generator to parse the POSIX shell language is, in our opinion, an indication that this belief is true. Second, even though we need to implement some form of speculative parsing to efficiently decide if a word can be promoted to a reserved word, the level of non-determinism required to implement this mechanism is quite light. Indeed, it suffices to exploit the purely functional state of our parser to implement a backtracking point just before looking at one or two new tokens to decide whether the context is valid for the promotion or not. This machinery is immediately available with the interruptible and purely functional LR(1) parsers produced by Menhir. In our opinion, the inherent complexity of generalized parsing frameworks is not justified in this context.

**Scannerless parsing** Many legacy languages (e.g. PL/1, COBOL, FORTRAN, R, ...) enjoy a syntax which is incompatible with the traditional separation between lexical analysis and syntactic analysis. Indeed, when lexical conventions (typically the recognition of reserved words) interact in a nontrivial way with the parsing context, the distinction between lexing and parsing fades away. For this reason, it can make perfect sense to implement the lexical conventions in terms of context-free grammar rules and to mix them with the language grammar.
With some adjustments to the GLR parsing algorithm to include a longest-match strategy, and with the introduction of specification mechanisms to declare layout conventions efficiently, the ASF+SDF project [25] has been able to offer a declarative language for writing modular scannerless grammar specifications for many legacy languages with parsing-dependent lexical conventions. Unfortunately, as said in Section 2.1.1, the lexical conventions of POSIX shell are not only parsing-dependent but also specified in a “negative way”: POSIX defines token recognition by characterizing how tokens are delimited, not how they are recognized. Besides, as shown in Section 2.1.2, the layout conventions of POSIX shell, especially the handling of newline characters, are unconventional, hence they hardly match the use cases of existing scannerless tools. Finally, lexical conventions depend not only on the parsing context but also on the nesting context, as explained in Section 2.1.4. For all these reasons, we are unable to determine how these unconventional lexical rules could be expressed following the scannerless approach. More generally, it is unclear to us whether the expressivity of ASF+SDF specifications is sufficient to handle the POSIX shell language without any extra code written in a general-purpose programming language.

**Schrödinger's tokens** Schrödinger's tokens [3] is a technique to handle parsing-dependent lexical conventions by means of a superposition of several states on a single lexeme produced by the lexical analysis. This superposition makes it possible to delay the actual interpretation of an input string to parsing time while preserving the separation between the scanner and the parser. This technique only requires minimal modifications to parsing engines. Morbig's promotion of words to reserved words follows a similar path: the prelexer produces pretokens which are similar to Schrödinger's tokens, since they enjoy several potential interpretations at parsing time.
The actual decision about the right interpretation of these pretokens as valid grammar tokens is deferred to the lexer and obtained by speculative parsing. No modification of Menhir's parsing engine was required, thanks to the incremental interface of the parsers produced by Menhir: the promotion code can be written on top of this interface.

### 8 Conclusion

Statically parsing shell scripts is notoriously difficult, due to the fact that the shell language was not designed with static analysis in mind. Nevertheless, we found ourselves in need of a tool that allows us to easily perform a number of different statistical analyses on a large number of scripts. We have written a parser that maintains a high level of modularity, despite the fact that the syntactic analysis of shell scripts requires an interaction between lexing and parsing that defies the traditional approach.

### Acknowledgment

We are grateful to the reviewers of the different versions of this paper. Their comments helped us to improve the paper as well as the implementation. We also thank Patricio Pelliccione and Davide Di Ruscio for discussions about their work done in the context of the Mancoosi project. This work has been supported by the French national research project ANR CoLiS (contract ANR-15-CE25-0001).

References
Memory Management
Instructor: Dr. Tongping Liu

Outline
- Simple memory management: swap etc.
- Virtual memory and paging
- Page table and address translation
- Translation lookaside buffer (TLB)
- Multi-level page table
- Page replacement algorithms and modeling
- Working set of processes

Memory Hierarchy
- CPU can directly access main memory and registers only
- But programs and data must be brought (from disk) into memory
- Memory accesses can be the bottleneck
- Cache sits between memory and CPU registers
- The hierarchy:
  - Cache: small, fast, expensive; SRAM
  - Main memory: medium-speed, not that expensive; DRAM
  - Disk: many gigabytes, slow, cheap, non-volatile storage

Memory
- The ideal memory is
  - Very large
  - Very fast
  - Non-volatile (doesn’t go away when power is turned off)
- The real memory is
  - Not very large
  - Not very fast
  - Affordable (cost)! ⇒ Pick any two...
- Memory management goal: make the real world look as much like the ideal world as possible

Limitations without virtual memory
- Protection of memory: using base and limit registers
- Many questions:
  1. How to generate the addresses for a process?
  2. How to assign physical memory?
Swapping
- Consider a multi-programming environment:
  - Each program must be in memory to be executed
  - Processes come into memory and leave memory when execution is completed
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
- **Backing store** – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
- **Roll out, roll in** – swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed. Swapping frees up memory for additional processes.

Swapping (cont’d)
- Major part of swap time is transfer time
  - Total transfer time is directly proportional to the amount of memory swapped (e.g., 10MB process / 40MB per sec = 0.25 sec)
  - May take too much time to be used often
- When the old process is swapped back in, can we relocate it? (depends on address binding)
- What if the swapped-out process was waiting for I/O?
  - Let the OS kernel handle all I/O; this costs an extra copy from kernel to user space
- Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows), but it is often disabled

Outline
- Simple memory management: swap etc.
- Virtual memory and paging
- Page table and address translation
- Translation lookaside buffer (TLB)
- Multi-level page table
- Kernel memory management
- Working set of processes

Virtual Memory
- Basic idea: allow the OS to allocate more memory than the real amount
- Programs use virtual addresses
  - Addresses are local to the process
  - Address space size is limited by the # of bits in the address (32/64)
    - 32 bits: 4 GB
    - 64 bits: 16 EB (2^64 bytes)
- Virtual memory >> physical memory

Motivations for Virtual Memory
- Use physical DRAM as a cache for the disk
  - Virtual pages of processes can exceed physical memory size
- Simplify memory management
  - Multiple processes resident in main memory, each with its own address space
  - Only “active” code and data is actually in memory
- Provide protection
  - One process can’t interfere with another, because they operate in different address spaces
  - A user process cannot access privileged information
  - Different sections of address spaces have different permissions

Virtual Memory for Multiprogramming
- Virtual memory (VM) is helpful in multiprogramming
  - Multiple processes in memory concurrently
  - Each process occupies a small portion of memory
  - CPU schedules process B while process A waits for its long I/O operations (e.g., retrieving data from disk)
- Physical memory de/allocation
  - Keep recently used content in physical memory
  - Move less recently used content to disk
  - Movement to/from disk is handled by the OS

How to get the physical address from the virtual one?

Virtual and Physical Addresses
- Virtual address space
  - Determined by instruction width
  - Same for all processes
- Physical memory indexed by physical addresses
  - Limited by bus size (# of bits) and the amount of available memory
- Paging: a memory-management scheme that permits the address space of a process to be non-contiguous

Paging and Page Systems
- Virtual address space is divided into pages
- Physical memory is divided into frames
- Page vs.
Frame
- Same-size address block
- Unit of mapping/allocation
- A page is mapped to a frame
  - All addresses in the same virtual page are in the same physical frame
  - Offset within a page

Page Table
- Each process has one page table
  - Maps page number → physical frame number
- Number of PTEs in the page table = number of total pages in the virtual space
  - Not just the pages in use — why?
- The page table is checked for every address translation
  - Where to store the page table?
- Not all pages need to map to frames at the same time
- Not all physical frames need be used

Translate Virtual to Physical Address
- Split the virtual address (from the CPU) into two pieces
  - Page number (p)
  - Page offset (d)
- Page number: index into an entry of the page table, which holds addresses of physical frames
- Page offset: position inside a page
- Page size = $2^d$ bytes: determined by the offset size

An Example of Virtual/Physical Addresses
- 64 KB virtual memory
- 32 KB physical memory
- 4 KB page/frame size → 12 bits as offset (d)

Address Translation
- Virtual address: 16 bits = page # (4 bits) + offset (12 bits)
- Physical address: 15 bits = frame # (3 bits) + offset (12 bits)
- How many virtual pages? How many physical frames?

Exercise
A tiny computing system with 1K bytes of physical memory, where the virtual address has 12 bits (4096 bytes). Suppose the size of a virtual page/frame is 128 bytes (i.e., 7 bits as the page offset).
- What is the number of virtual pages for each process? 32 pages
- How many physical frames in total? 8 frames
- How many entries in the page table for each process?
32 entries

Page Table Entry (PTE)
Each entry in the page table contains:
- Frame number: number of bits depends on the # of frames in physical memory
- Valid bit: set if the page has a corresponding physical frame in memory
  - If not valid, the remainder of the PTE is irrelevant
- Referenced bit: set if data on the page has been accessed
- Dirty (modified) bit: set if data on the page has been modified
- Protection information
Size of each PTE: at least the frame number plus 4 bits

Page Table and Context Switches
- Different processes have different page tables
  - CR3 points to the page table
  - CR3 is reloaded on a context switch
- The page table resides in main (physical) memory
  - As a contiguous memory segment — why?

Address Translation Architecture
How big is the page table?

Examples for Address Translation
1K bytes of physical memory, a 12-bit virtual address, and 128-byte pages.

Virtual address 0x0044:
1. Size of a page: 128 B
2. Page index and offset: (0, 0x44)
3. Frame index (from the page table): 2
4. Starting address of the frame: 2 * 0x80
5. Physical address: 2 * 0x80 + 0x44 = 0x144

Virtual address 0x0224:
1. Size of a page: 128 B
2. Page index and offset: (4, 0x24)
3. Frame index (from the page table): 3
4. Starting address of the frame: 3 * 0x80
5. Physical address: 3 * 0x80 + 0x24 = 0x1A4

Quiz for Address Translation
Virtual address 0x0136:
1. Size of a page: 128 B
2. Page index and offset: (2, 0x36)
3. Frame index (from the page table): 3
4. Starting address of the frame: 3 * 0x80
5. Physical address: 3 * 0x80 + 0x36 = 0x1B6

Page Table Size for a 32-bit System
- Modern systems/applications:
  - 32-bit virtual address
  - System with 1 GB physical memory → 30-bit physical address
  - Suppose the size of one page/frame is 4 KB (12 bits)
- Page table size:
  - # of virtual pages: 2^(32−12) = 2^20
  - Page table size = PTE size × 2^20 = 4 bytes × 2^20 = 4 MB per process
- If there are 128 processes:
  - Page tables occupy 128 × 4 MB = 512 MB
  - 50% of memory would be used by page tables?!

Outline
- Simple memory management: swap etc.
- Virtual memory and paging
- Page table and address translation
- Translation lookaside buffer (TLB)
- Multi-level page table
- Track free memory: bitmaps or linked list
- Page replacement algorithms and modeling
- Working set of processes
- Other implementation issues

How can we get a smaller page table?!
Two-Level Page Tables
- Solution: multi-level page tables
- Virtual address: three parts
  - Level-one page number (10 bits)
  - Level-two page number (10 bits)
  - Offset (12 bits)
- A PTE in the 1st-level page table contains the physical frame # of one 2nd-level page table
- The 2nd-level page table holds the actual physical frame numbers for the memory address
- Why is this good?
  - We don’t have to allocate all levels initially, which reduces the size of the page table
  - The tables don’t have to be contiguous

Example: 2-Level Address Translation
- Level-one page number: p1 = 10 bits
- Level-two page number: p2 = 10 bits
- Page offset: 12 bits
Which tables should be in memory?

Memory Requirement of Page Tables
- Only the 1st-level page table and the required 2nd-level page tables need to be in memory
- 32-bit machine, 4K pages, 4-byte entries, two-level page table: what is the size of the full page table?
  - Level-0: 1024 entries × 4 bytes
  - Level-1: 1024 × 1024 entries = 1M entries × 4 bytes
  - Total: 4 MB + 4 KB

Page Table Size
- 32-bit machine, 4K pages, 4-byte entries, one-level page table (full 4 GB linear address)
  - Page table size = 2^20 entries × 4 bytes = 2^22 bytes = 4 MB
- 32-bit machine, 4K pages, 4-byte entries, two-level page table (two pages in use: 0x00000000 and 0xFFFFF000)
  - Page table size = (2^10 level-0 entries × 4 bytes) + (2^10 level-1 entries × 4 bytes) × 2 = 12 KB

Memory Requirement of Page Tables
- Example: a process accesses 32 MB (recall 1 GB of memory and 32-bit virtual addresses); what is the minimum and maximum memory for the page table?
  - 4 KB/page ⇒ the process has at most 8K virtual pages
  - One 2nd-level page table maps 2^10 pages
- Computing the minimum memory consumption:
  - Minimum number of 2nd-level page tables needed (all pages contiguous): 8K virtual pages / 2^10 = 8
  - Total (minimum) memory for the page table: the 1st-level page table plus 8 second-level tables;
in total, we need 9 pages to hold page tables
- 9 × 4 KB = 36 KB

Memory Requirement of Page Tables (cont’d)
Same example: a process accesses 32 MB (1 GB of memory, 32-bit virtual addresses); what is the maximum memory for the page table?
- 4 KB/page → the process has at most 8K virtual pages
- One 2nd-level page table maps 2^10 pages
- Computing the maximum memory consumption:
  - The 8K virtual pages could spread across all of the 2nd-level page tables, of which there are only 1024 in total
  - Thus, in the worst case we need 1 page for the 1st-level page table + 1024 pages for 2nd-level tables: the maximum is 4 MB + 4 KB

Quiz
- Why does each page table have to be physically contiguous?
- If a process uses two pages (0x00000000 and 0x0020100), what is the size of the page table?
- What are the tradeoffs between a small page size and a large page size?
- What are the advantages and disadvantages of using a single-level page table?

Fragmentation
- External fragmentation: total memory space exists to satisfy a request, but it is not contiguous
- Internal fragmentation: allocated memory is larger than requested; this size difference is internal fragmentation
- How can we reduce external fragmentation?
  - Compaction: migrate memory contents to place all free memory together in one large block
  - Compaction is possible only if relocation is dynamic and done at execution time

Paging: Internal Fragmentation
- Calculating internal fragmentation
  - Page size = 2,048 bytes
  - Process size = 72,766 bytes = 35 pages + 1,086 bytes
  - Internal fragmentation = 2,048 − 1,086 = 962 bytes
- Worst-case fragmentation = 1 frame − 1 byte
- Average fragmentation = 1/2 frame size
- So are small frame sizes desirable? → more page table entries
  - Each page table takes memory to track
- Page sizes have grown over time
  - Solaris supports two page sizes: 8 KB and 4 MB

Size of Page/Frame: How Big?
- Determined by the number of bits in the offset (12 bits → 4 KB)
- Smaller pages have advantages
  - Less internal fragmentation
  - Better fit for various data structures and code sections
- Larger pages are better because
  - Less overhead to keep track of them
  - More efficient to transfer larger pages to and from disk
- One design principle: all entries of one level of the page table should fit into one frame

Designing a Multi-Level Page Table
- Suppose a system has a 28-bit logical address space and is byte-addressable. The amount of physical memory is 1 MB (i.e., the physical address has 20 bits) and the size of a page/frame is 1K bytes. How do we design a two-level page table?
  1. The size of a page table entry will be 4 bytes (it must hold at least 20 bits, so round up)
  2. One page can hold 256 entries (1K / 4 = 256)
  3. Thus, we use 8 bits to index the 2nd-level page table
- Same system: what about a design with 10 bits for the second level?
  - A 2nd-level table would then have 2^10 entries × 4 bytes = 4 KB, which cannot fit into one 1 KB page
  - We would need multiple (4) contiguous pages per table, which either cannot be guaranteed or increases the complexity of the OS design

Linux’s 3-Level Page Table
A linear address is converted to a physical address using 3 levels:
- Index into the Page Directory
- Index into the Page Middle Directory
- Index into the Page Table
- Page offset

What is the benefit of a 3-level page table? What is the shortcoming?
- Benefits:
  1. Reduces memory consumption
  2. Supports different architectures (x86: 2 levels, SPARC: 3 levels); the middle level easily collapses
- Problem:
  1. Expensive lookups

How can we make the address translation faster?
Translation Lookaside Buffer (TLB)
- Small hardware cache: fast
- Stores recently accessed page → frame mappings (64 ~ 1024 entries)
- If the desired logical page number is found, get the frame number from the TLB
- If not:
  - Get the frame number from the page table in memory
  - Use standard cache techniques: replace an entry in the TLB with the logical & physical page numbers from this reference
- Contains complete page table entries for a small number of pages

Address Translation with TLB
What happens when the CPU performs a context switch?

Integrating VM and Cache
- Most caches are “physically addressed”
  - Accessed by physical addresses
  - Allows multiple processes to have blocks in the cache at the same time (otherwise a context switch would mean a cache flush)
  - Allows multiple processes to share pages
- The cache doesn’t need to be concerned with protection issues
  - Access rights are checked as part of address translation
- Address translation is performed before the cache lookup
  - This could involve a memory access itself (to get the PTE)
  - So page table entries can also be cached

Integrating TLB and Cache
Basic workflow:
1. The CPU generates a virtual address.
2. Check the TLB for this page’s mapping. If the corresponding entry exists, perform the translation and get the physical address. Otherwise, go through the slow page-table walk, and then save the mapping into the TLB (typically after the translation).
3. With the physical address, check whether the corresponding cache line is in the cache. If yes, it is a cache hit: return the memory unit to the CPU.
4. Otherwise, it is a cache miss.
We fetch the cache line into the cache, and then return the memory unit to the CPU.

Memory Access Time
- Assuming:
  - TLB lookup time = a
  - Memory access time = m
  - Hit ratio (h): percentage of time that a logical page number is found in the TLB
    - More TLB entries usually means a higher h
- Effective Access Time (EAT), not including cache effects:
  - EAT = (m + a)h + (m + m + a)(1 − h) = a + (2 − h)m
- Interpretation:
  - A reference always requires a TLB lookup and 1 memory access
  - A TLB miss also results in one additional memory reference (the page table lookup)

Example of Memory Access Time
Assume the TLB has an access time of 4 ns and the memory access time is 20 ns. The disk access time is 8 ms. A page/frame is 1K bytes. TLB contents:

<table> <thead> <tr> <th>Page number</th> <th>TLB entry</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> </tr> <tr> <td>2</td> <td>3</td> </tr> <tr> <td>3</td> <td>4</td> </tr> <tr> <td>4</td> <td>5</td> </tr> </tbody> </table>

DEMAND PAGING

Demand Paging
- Bring a page into memory only when it is needed
  - Less I/O needed
  - Less memory needed
  - Faster response
  - More users
- A page is needed ⇒ there is a reference to it
  - invalid reference ⇒ abort
  - not-in-memory ⇒ bring into memory

Valid-Invalid Bit
- With each page table entry a valid–invalid bit is associated
  - v ⇒ in memory, i ⇒ not in memory
- Initially the bit is set to i on all entries
- During address translation, if the valid–invalid bit in the page table entry is i ⇒ page fault (trap)

Page Fault
1. Reference to a page; if it is an invalid reference ⇒ abort
2. If the page is not in memory, a page fault occurs (trap to the OS)
3. The OS allocates an empty frame
4. Swap the page into the frame
5. Reset the page tables; set the valid bit = v
6.
**Restart the instruction** that caused the page fault

**Performance of Demand Paging**
- Page fault rate 0 ≤ p ≤ 1.0
  - If p = 0, no page faults
  - If p = 1, every reference is a fault
- Effective Access Time (EAT):
  \[ EAT = (1 - p) \times \text{memory access} + p \times \text{page fault time} \]
- The page fault time depends on several factors:
  - Save user registers and process state, check the page reference, read from the disk (there might be a queue, and the CPU can be given to another process), get the interrupt, save the other user’s registers and process state, correct the page table, put this process into the ready queue...
  - Due to the queues, the page fault time is a random variable

**Demand Paging Example**
- Memory access time = 200 nanoseconds
- Average page-fault service time = 8 milliseconds
- \[ EAT = (1 - p) \times 200 + p \times 8{,}000{,}000 = 200 + p \times 7{,}999{,}800 \]
- If one out of 1,000 accesses causes a page fault, then EAT = 8.2 microseconds — a slowdown by a factor of 40!
- If we want just 10% performance degradation, then:
  \[ 220 > 200 + p \times 7{,}999{,}800 \Rightarrow p < 0.0000025 \]
  i.e., fewer than 1 page fault out of 400,000 accesses

Thrashing
- If a process does not have “enough” pages, the page-fault rate is very high
  - E.g., a process needs 6 pages but has only 5 frames, so it keeps evicting pages it will need again: frequent faults
- This leads to:
  - Low CPU utilization
  - The OS increases the degree of multiprogramming, adding another process to the system — making things worse

Locality and Thrashing
- To prevent thrashing we should give enough frames to each process — but how much is “enough”?
- Locality model:
  - A process migrates from one locality to another
  - Localities may overlap
- When \( \sum \) size of localities > total memory size, thrashing occurs...
- Increase locality in your programs!
Working-Set Model
- Working-set window \( \Delta \) = a fixed number of page references, e.g., 10,000 instructions
- WSS (working set of process \( P \)) = total number of pages referenced in the most recent \( \Delta \)
  - if \( \Delta \) is too small, it will not encompass the entire locality
  - if \( \Delta \) is too large, it will encompass several localities
  - if \( \Delta = \infty \), it will encompass the entire program
- \( D = \sum \) WSS = total demand for frames
- If \( D > m \) (available frames) \( \Rightarrow \) thrashing
- Thus, if \( D > m \), suspend one of the processes

Working-Set Definition
- Informal definition: the collection of pages that a process is working with, and which must thus be resident if the process is to avoid thrashing
- The idea is to use the recent needs of a process to predict its future needs:
  - Choose \( \Delta \), the working-set parameter. At any given time, all pages referenced by a process in its last \( \Delta \) seconds of execution are considered to comprise its working set
  - Pages outside the working set may be discarded at any time

Keeping Track of the Working Set
- Approximate with an interval timer + a reference bit
- Example: \( \Delta = 10{,}000 \)
  - Timer interrupts after every 5,000 time units
  - Keep a reference bit and an in-memory bit for each page
  - At a timer interrupt, copy all reference bits and then reset them to 0
  - If one of the copied bits = 1 \( \Rightarrow \) the page is in the working set
- Why is this not completely accurate?
- Improvement: keep 10 history bits and interrupt every 1,000 time units

Balance Set
- The working set alone is not enough to control thrashing
  - If the sum of the working sets of all runnable processes is greater than the size of memory, refuse to run some of the processes
- Divide runnable processes into two groups: active and inactive
  - When a process is made active, its working set is loaded; when it is made inactive, its working set is allowed to migrate back to disk
- The collection of active processes is called the balance set

Page-Fault Frequency (PFF) Scheme
- The working set is a clumsy way to control thrashing; PFF is a more direct way
  - High PFF \( \rightarrow \) more thrashing
- Establish an “acceptable” page-fault rate
  - If the actual rate is too low, the process loses a frame
  - If the actual rate is too high, the process gains a frame
- Suspend a process if its PFF is above the upper bound and there are no free frames!

What happens if there is no free frame?
- Terminate a user program
- Swap out some page

PAGE REPLACEMENT

Page Replacement
- No free frame in memory, so a page needs to be replaced
- Pages that are replaced might be needed again later
- We need algorithms to minimize the number of page faults
- Other improvements apply, e.g., use the modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk

Page Replacement Algorithms
- How to select the victim frame?
  - You can select any frame and page replacement will still work — but what about performance?
  - The goal is the lowest page-fault rate
- Evaluate an algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
- In all our examples, we will have 3 frames and the following reference string:
  7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

Basic Page Replacement
- Find the desired page on disk
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm:
  1. Select a victim frame and swap it out (use the dirty bit to write back only modified frames)
  2. Bring the desired page into the (newly) free frame
  3. Update the page and frame tables
- Restart the process

First-In-First-Out (FIFO) Algorithm
- Maintain a FIFO queue of resident pages
- The page brought in earliest may no longer be needed — but an array used early might be used again and again
- Easy to implement
- Belady’s Anomaly: more frames ⇒ possibly more page faults
- FIFO illustrating Belady’s Anomaly — reference string (12 accesses):
  1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Optimal Algorithm
- Replace the page that will not be used for the longest time
- How do you know the future?
- Used for measuring how well other algorithms perform

Least Recently Used (LRU) Algorithm
- Use the recent past as an approximation of the future
- Select the page that has not been used for the longest time
- It is OPT if you look at time backward
- No Belady’s Anomaly: more frames ⇒ fewer page faults
- Exercise: given the reference string of page accesses 1 2 3 4 2 3 4 1 2 1 1 3 4 and a system with three page frames, what is the final configuration of the three frames after the true LRU algorithm is applied?
- Problem of LRU: how to implement it efficiently?
  - Full LRU needs to order all pages by time of reference

LRU Algorithm (Cont.)
- Counter (logical clock) implementation
  - Increment the counter every time a page is referenced
  - Save it into the page’s time-of-use field
  - To replace, find the page with the smallest time-of-use value
  - Problems: counter overflow and linear-search performance
- Stack implementation: keep a stack of page numbers in a doubly linked list
  - When a page is referenced, move it to the top (requires 6 pointer updates)
  - No search for replacement: the least recently used page is at the bottom

LRU Approximation Algorithms
- Reference bit
  - With each page associate a reference bit, initially 0
  - When the page is referenced, the hardware sets this bit to 1
  - Replace a page whose bit is 0 (if one exists)
  - We do not know the order, however; additional bits can help gain more ordering information
- Second-chance algorithm (uses one reference bit)
  - FIFO with an inspection of the reference bit
  - If the reference bit is 0, replace that page
  - If the reference bit is 1, give the page a second chance:
    - Leave the page in memory
    - Clear the bit and set its arrival time to the current time
    - Go on to the next page

Global vs.
Local Allocation
- Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another
  - High-priority processes can take all frames from low-priority ones (causing thrashing)
  - A process cannot control its own page-fault rate
- Local replacement: each process selects only from its own set of allocated frames
  - Consistent performance
  - Lower utilization of memory and less throughput

Summary: Page Replacement Algorithms

<table> <thead> <tr> <th>Algorithm</th> <th>Comment</th> </tr> </thead> <tbody> <tr> <td>FIFO (First-In, First-Out)</td> <td>Might throw out useful pages</td> </tr> <tr> <td>Second chance</td> <td>Big improvement over FIFO</td> </tr> <tr> <td>LRU (Least Recently Used)</td> <td>Excellent, but hard to implement exactly</td> </tr> <tr> <td>OPT (Optimal)</td> <td>Not implementable, but useful as a benchmark</td> </tr> </tbody> </table>

ALLOCATING KERNEL MEMORY
- Kernel memory is treated differently from user memory (user allocations get a whole page even when 1 byte is needed)
- Often allocated from a different free-memory pool
- The kernel requests memory for structures of varying sizes
- Some kernel memory needs to be contiguous

Kernel Memory Allocation
- E.g., the Linux PCB (struct task_struct) is > 1.7 KB each
  - Created on every fork and every thread create (clone()); deleted on every exit
- Kernel memory allocators:
  - Buddy system
  - Slab allocation

Buddy System (Dividing)
- Free lists of blocks of \(2^n\) page frames; two contiguous blocks of the same size are “buddies”, and the first one starts at an address that is a multiple of the pair’s size

Buddy Allocation
Example: need to allocate 65 contiguous page frames.
1. Look in the list of free 128-page-frame blocks.
2. If a free block exists, allocate it; else look in the next higher order list (here, 256-page-frame blocks).
3. If the first free block is in the 256-page-frame list, allocate a 128-page-frame block and put the remaining 128-page-frame block in the lower order list.
4. If the first free block is in the 512-page-frame list, allocate a 128-page-frame block and split the remaining 384 page frames into 2 blocks of 256 and 128 page frames. These blocks are put on the corresponding free lists.

**Question:** What is the worst-case internal fragmentation?

**Buddy De-Allocation**
When blocks of page frames are released, the kernel tries to merge pairs of “buddy” blocks of size \(b\) into blocks of size \(2b\). Two blocks are buddies if:
1. They have equal size \(b\).
2. They are located at contiguous physical addresses.
3. The address of the first page frame of the first block is aligned on a multiple of \(2b\).
The process repeats by attempting to merge buddies of size \(2b\), \(4b\), \(8b\), etc.

**Slab Allocator**
- Performs the following functions:
  1. Allocate memory
  2. Initialize objects/structures
  3. Use objects/structures
  4. Deconstruct objects/structures
  5. Free memory
- `/proc/slabinfo` gives full information about memory usage on the slab level
(see also `/usr/bin/slabtop`)

Slab Allocator
- A **slab** is one or more physically contiguous pages
- A **cache** consists of one or more slabs
- There is a single cache for each unique kernel data structure (process descriptors, file objects, semaphores)
  - Each cache is filled with objects: instantiations of the data structure
- When a cache is created, it is filled with objects marked as free
- When structures are stored, objects are marked as used
- If a slab is full, the next object is allocated from an empty slab; if there are no empty slabs, a new slab is allocated
- Benefits include:
  - No fragmentation
  - Memory requests are satisfied quickly

WHAT HAPPENS WHEN ALLOCATING MEMORY

Memory allocation (using mmap/brk):

```c
#include <stdio.h>
#include <stdlib.h>

int main() {
    int *ptr = malloc(4);
    *ptr = 1;
    free(ptr);
}
```

- Before the call to `malloc`, there is no heap space at all, because we haven’t used any heap yet
- After `malloc`, the heap is allocated from the kernel: the virtual addresses from 0x0804b000 to 0x0806c000 (0x21000 bytes, i.e., 33 4-KB pages) become usable; `ptr` is actually 0x804b008

What do we learn here?
- Typically, the user asks the kernel for one big block of memory (via mmap or brk), and the kernel sets up its page table
- This memory is then managed by the user-space memory manager
- How is the memory managed inside user space?

Summary
- Simple memory management: swap etc.
- Virtual memory and paging
- Page table and address translation
- Translation lookaside buffer (TLB)
- Multi-level page table
- Page replacement algorithms
- Working set of processes
- Kernel Memory Management
Bachelor thesis Computing Science
Radboud University

Improving OpenCRE

Author: Thomas Klein Breteler (s4068246)
First supervisor/assessor: dr. E. Poll (Erik), erikpoll@cs.ru.nl
SIG supervisor: R. van der Veer (Rob), r.vanderveer@sig.eu
Second assessor: dr. I. Buhan (Ileana), illeana.buhan@ru.nl
June 14, 2022

Abstract

One of the goals of the OWASP Integration Standards project is to deliver a linking mechanism which connects any number of software security standards. This linking mechanism, OpenCRE, is currently in beta, and while the core functionalities are working, much work must be done to improve the user experience and the linking between OpenCRE topics and standards. In this thesis, we assess how OpenCRE can be improved. We performed a general assessment from the perspective of a newcomer to the project as well as through interviews with multiple stakeholders. Our initial findings are that the visualisation of the complex hierarchy of topics within the CRE is unclear, the page usage is inefficient, much unnecessary information is shown, and the documentation is ineffective. We present several suggestions for change, in the form of mockups, based on our assessments and interviews. The accepted suggestions include a change in the hierarchy visualisation, the removal of unnecessary text, and the renaming of many of the content blocks. The interviews yielded several interesting new insights which were worth considering, the most promising being the introduction of explanation tooltips and the possibility of adding a navigation sidebar. A second requirement was to research the possibilities to improve the linking to standards. We provide a way to link directly to a location in an HTML document; for PDF and Markdown documents the possibilities are very limited. Finally, we analysed how to improve the coverage of CWE in OpenCRE. CWE view 699 provides an overview of all CWE entries related to software development, which is the most practical and relevant to OpenCRE.
Other than manually linking and analysing CWE, we could not think of an effective way of improving the linking in OpenCRE.

Contents

1 Introduction
2 Background
  2.1 OWASP Integration Standards Project
  2.2 OpenCRE
  2.3 Human-Computer Interaction
3 How to improve OpenCRE?
  3.1 Research approach
4 Assessing the user experience
  4.1 Assessment of usability
  4.2 Documentation of OpenCRE
  4.3 Interviews about the user experience
    4.3.1 Summary of feedback points
  4.4 Suggested changes
    4.4.1 Cleaning up the topic pages
    4.4.2 Restructuring of the tree hierarchy
    4.4.3 Tooltips and explanations
    4.4.4 Topic browsing through a sidebar
    4.4.5 Miscellaneous changes
5 Improved deep linking
  5.1 Deep linking
    5.1.1 HTML
    5.1.2 PDF
    5.1.3 Markdown
6 Improved linking to CWE
  6.1 Charting CRE
  6.2 Charting CWE
  6.3 ZAP-CWE-CRE discrepancies
  6.4 How can the CWE-CRE linking be improved?
7 Future work
8 Conclusions
A Appendix
  A.1 Interview
  A.2 ZAP-CWE-CRE analysis raw data

Chapter 1 Introduction

There are hundreds of known security threats to IT systems and just as many measures to stop these threats. To get an idea of what to implement to prevent known and unknown threats, standardisation is required. This is done through many different software security standards which cover all kinds of topics, like security requirements, testing and good practice. Some leading makers of standards for software security are ISO (the International Organisation for Standardisation), NIST (the national standards institute of the USA) and OWASP (an open-source, community-led non-profit foundation). OWASP alone already has over 40 different standards, ranging from detailed requirement lists like the ASVS (Application Security Verification Standard) [9] to broad explanation documents like the Cheat Sheet Series [6].

With over 200 relevant national and international standards, the standards landscape is very fractured. The 2019 ENISA (EU Agency for Cybersecurity) report on the advancement of software security recommended to "Develop a common repository for shared security measures" [2], as there is much overlap between different standards. This is, however, easier said than done. Numerous attempts to visualise or link standards have been made, but with little success. One example is iotsecuritymapping.uk, an initiative to map IoT security standards to a limited set of IoT-related topics, as shown in figure 1.1. This overview is neither practical nor maintainable, as updating it has to be done manually.

OpenCRE is to be a solution to the fractured landscape. It is a mechanism designed to link specific sections of different security standards to common topics. CRE stands for Common Requirement Enumeration, and OpenCRE consists of many of these requirements, referred to as topics.
These topics are connected in the form of a hierarchy, such that it can accommodate linking to both very specific standard entries as well as very broad explanation documents. Through cooperation with the makers of standards, OpenCRE can function in a maintainable fashion, which is essential when having to incorporate data from a large number of sources into a single tool. We discuss the details of OpenCRE in chapter 2.

OpenCRE is currently in beta. The basic topic hierarchy is fully functional and links several standards; however, fine-tuning is needed to make it a convenient and user-friendly tool. One of the goals of this thesis is to identify issues with the user experience and make recommendations on how to solve them. Furthermore, we will be looking at how the linking to standards can be improved.

In Chapter 2 we discuss the OWASP Integration Standards project, of which OpenCRE is part, and the technical details of OpenCRE itself. In Chapter 3 we discuss the goal and the scope of this thesis. In Chapter 4 we assess the user experience from the perspective of a newcomer to the project, and we discuss the interviews we performed with different stakeholders of the project to get their views on how to improve OpenCRE. In Section 4.4 we make concrete suggestions on how to improve the user experience of OpenCRE. Finally, in Chapter 5 we discuss technical and strategic improvements to OpenCRE.

Chapter 2 Background

In Section 2.1 we discuss the OWASP Integration Standards project and in Section 2.2 OpenCRE.

2.1 OWASP Integration Standards Project

The OWASP Integration Standards Project is an open initiative to promote technical interaction between software security initiatives in and outside of OWASP [11]. The goal is to reduce the fragmentation and complexity of the standards landscape. The project has 4 deliverables:

1. A report on the software development life cycle [1].
2. The Security Wayfinder: an interactive overview of different OWASP projects. See figure 2.1.
3.
The Common Requirement Enumeration (CRE): a mechanism which links content from different types of standards, bringing together requirements, testing strategies, countermeasures and repositories of weaknesses. The CRE (OpenCRE) is the focus of this thesis, and we provide more technical details in section 2.2.

4. A tool which helps integrate security initiatives into different stages of the software development life cycle.

2.2 OpenCRE

OpenCRE is a mechanism which links common requirements to different software security standards. CRE stands for Common Requirement Enumeration, and these common requirements form the topics on which the mechanism is built. The topics are interconnected and form a tree hierarchy, with a series of top-level topics which have multiple child topics. The result is a structure which can accommodate both broad topics such as "authentication"¹ as well as very specific topics such as "Mutually authenticate application and credential service provider"².

OpenCRE focuses on 2 use-cases:

1. The first use-case is to enable a developer, tester or anyone involved in the software development process to quickly view what different standards have to say about a certain topic, navigate efficiently from one standard to another, and get an overview of the standards relevant to the development process. For example, if a tester is looking through security requirements in the ASVS (the leading requirement standard), they can follow the link in the ASVS entry of interest to OpenCRE. There they will find a list of standards linked to the same topic, among others the corresponding entry of the WSTG (the leading OWASP testing guide). The tester can thus navigate from a topic in the ASVS to the WSTG without having to search for the corresponding coverage in the WSTG.

2. The second use-case is to provide a comprehensive overview of a topic.
This overview provides different sources for the relevant topic as well as a comprehensive list of closely related topics. For example, if a developer wants to know more about "input and output verification"³, the corresponding topic in OpenCRE offers an overview of related topics and standards.

OpenCRE can foster a better understanding of cybersecurity as a whole by helping standard makers link to other standards instead of having to cover everything themselves. It will also highlight security subjects which might be underrepresented in security standards, thereby contributing to the general understanding of the security field.

The platform is in beta and is available on www.opencre.org. It currently (February 2022) links 5 OWASP standards:

1. OWASP Top 10 \cite{owasp-top-10}: the 10 most common security flaws. It provides a short description of the flaws and some general advice on how to fix them.
2. ASVS (Application Security Verification Standard) [9]: one of the leading OWASP projects. It is a comprehensive list of requirements for developers.
3. OWASP Proactive Controls \cite{owasp-proactive}: a top-10 requirement list of must-dos for architects and developers.
4. OWASP Cheat Sheets [6]: explanation documents on various security subjects, with a focus on how to securely implement them. E.g. the Session Management Cheat Sheet⁴ explains everything you need to know about session management from a security perspective.
5. WSTG (Web Security Testing Guide) [8]: the leading OWASP security testing guide.

¹ https://www.opencre.org/cre/633-428
² https://www.opencre.org/cre/558-807
³ https://www.opencre.org/cre/503-455

and 3 other important sources and standards:

1. Common Weakness Enumeration (CWE) [3]: a repository of around 1000 known weaknesses maintained by MITRE. Although they are referred to as weaknesses, they are more accurately generalised vulnerabilities.
2.
NIST-800-53 [4]: a NIST⁵ standard on information systems in organisations.

OpenCRE is not the first attempt to link standards; multiple attempts have been made, but with mixed results and limited usability. OpenCRE aims to solve 3 problems encountered in previous efforts [13].

- The first problem is that linking all standards to each other is too much work and unmaintainable (see figure 2.2).

Figure 2.2: Linking every standard to an entry in another standard doesn't work

⁴ https://cheatsheetseries.owasp.org/cheatsheets/Session_Management_Cheat_Sheet.html
⁵ National Institute of Standards and Technology of the USA

The solution to this is to create shared topics and link the standards to those topics. This way users can view a certain topic and see what different standards have to say about it, as if it were a single resource. An example of such a topic is "logging and error handling"⁶. On this OpenCRE page a user can see all standards directly linked to this topic as well as other OpenCRE topics linked to it (see figure 2.3).

Figure 2.3: Shared topics linking standards [13]

- The second problem is that finding a certain topic in the forest of subtopics is too much work for most users (see figure 2.4).

⁶ https://www.opencre.org/cre/842-876

To remedy this, high-level topics are introduced. These high-level topics link broad standards which cover multiple subtopics at once. The high-level topics are connected to subtopics, forming a hierarchy of related topics (see figure 2.5).

Figure 2.5: The topic hierarchy removes the mismatch in topic depth.

- The final problem is that when OpenCRE links to a standard and that standard changes anything, the OpenCRE link breaks or becomes incorrect. Take for example the differences between the OWASP Top 10 2017 and Top 10 2021, as visualised in figure 2.6.
Mapping just the differences between 2 versions of the same standard is already a complex matter; mapping between multiple standards makes it a lot harder, and especially harder to maintain. Normally someone would have to manually change the links in OpenCRE to adopt the changes of the OWASP Top 10 2017 compared to the OWASP Top 10 2021. For a standard with just 10 entries which updates every few years this might be doable (but still very undesirable); however, if OpenCRE were to incorporate 40 or so standards, maintaining the system would not be viable.

The solution is to make the standards link to the unique CRE codes and map according to these codes. This way, when a standard changes, the mapping algorithm can automatically find the location of the relevant pages and will always display the latest version. This feature has yet to be implemented, as standard makers have yet to update their standards to include the relevant CRE codes. In short: have the standards link to the correct OpenCRE topic rather than the other way around.

Figure 2.7 shows a schematic overview of OpenCRE. The topics are linked to each other in a hierarchy and form multiple trees which cover different subjects. Each topic is linked to several standards which cover the subject of the topic. Finally, some CRE topics are linked to a different CRE topic outside of the main hierarchy. These are "related to" relations, as often 2 topics can have strong relevance to each other while not being part of the same hierarchy. For example: "Monitor unusual activities on the system" is part of the business logic hierarchy; however, this CRE is strongly related to "logging and error handling". The CRE can in this case refer to another topic to provide a more complete picture of the topic.

One of the future goals of OpenCRE is to enable an analysis of the standards landscape by looking at the overlap and gaps between different standards.
Another goal is to perform anonymised data analysis on the use of OpenCRE. The insights gained from this can help improve the software security field.

2.3 Human-Computer Interaction

Since much of this thesis involves redesigning the user interface of OpenCRE, it is worth taking a look at what is relevant in the field of human-computer interaction. Ben Shneiderman introduced his "eight golden rules of user interface design" [12] in 1986, which remain leading in the field to this day. The rules are as follows:

1. Strive for consistency.
2. Enable frequent users to use shortcuts.
3. Offer informative feedback.
4. Design dialogue to yield closure.
5. Offer simple error handling.
6. Permit easy reversal of actions.
7. Support internal locus of control.
8. Reduce short-term memory load.

Of these rules, especially 2, 3 and 8 are relevant to this project, as too much irrelevant information is shown and too little explanation of the topics is given. Another useful lesson from Shneiderman's book is to consider the level of knowledge of the users. We can assume that all users will have a background in computing science; while not necessarily experts on security topics, they will have a basic understanding of computing science. This gives us some leeway in our use of language, as we can use some technical terminology which would not be understood by someone outside the field.

Chapter 3 How to improve OpenCRE?

The goal of this thesis is to maximise the success of OpenCRE. To do this we must first define what success entails; we discuss this in section 3.1.

3.1 Research approach

The goal of this thesis is to help maximise the success of OpenCRE. The problem is that success is a vague term which can be interpreted in many ways, so if we want to maximise the success of OpenCRE we must first determine what success means in this context. OpenCRE is first and foremost a tool used to link the standards landscape. From this perspective, success can be seen as adoption by standard makers.
This means that as more standard makers link to the CRE, its value increases, as it offers the bridge between standards. There have been multiple initiatives that have linked standards with little success, as the resulting lists are generally incomprehensible due to the amount of different information they try to group and link. This is the second key success point for OpenCRE: not only should there be a link between many different standards, the linking should also be clear, such that users can conveniently navigate between standards or get an overview of the literature on a topic of choice. To maximise the success of OpenCRE we will focus on making the tool more user-friendly and on increasing the accuracy of the linking. By doing so, standard makers will be more inclined to adopt CRE links into their standards.

Chapter 4 Assessing the user experience

In this chapter we assess the user experience of OpenCRE and its documentation. We dive into OpenCRE from the perspective of a newcomer, without receiving any substantial explanation, to see what kind of first impression it gives (section 4.1). We assess the documentation in section 4.2. In section 4.3 we discuss the interviews we performed with 3 stakeholders and some of the suggestions they gave. Finally, in section 4.4 we make suggestions on what and how to improve the user experience of OpenCRE.

4.1 Assessment of usability

The CRE project is currently in beta: most core functionalities have been implemented and are working. However, many improvements are necessary for it to become the product it was designed to be. In this section we assess different parts of OpenCRE to see what can be improved and make suggestions on how to improve them. As part of the first assessment we covered 2 use cases:

1. Access OpenCRE through the link in an included standard such as ASVS 6.1.
2. Use the text search or browse feature to find information on a certain topic.
In both cases the focus was on the content, front-end, appearance and user experience. The initial finding was that the topic pages (e.g. https://www.opencre.org/cre/842-876) are hard to read and confusing, for the following reasons:

https://github.com/northpole/ASVS/blob/04316f240bc1f7bad058394a40d183c34d14521f/4.0/en/0x14-V6-Cryptography.md

1. The inverted tree structure is confusing. Tree structures are normally top-down, with the parent node above the child nodes. In OpenCRE the choice was made to visualise the tree recursively from the bottom up. This representation was chosen to show the most closely related topics first; the downside, however, is that navigation becomes less clear. Figure 4.1 shows the difference between a regular tree and an "inverted" tree.

Figure 4.1: Different visualisations of a tree structure

2. Relations between topics and standards in OpenCRE are explained through a sentence like "x is related to:" or "x is linked to". However, these textual relations are not explained anywhere, and it is not always clear what they mean. Combined with the previous point, this makes a topic page hard to understand, especially for first-time users.

3. Every topic has a unique identification code connected to it. However, these codes make no real sense to users, as they are semi-random. They only contribute to cluttering the screen with numbers.

4. Aside from the codes, there is a lot of unnecessary text and there are many prefixes on the screen, which makes the screen feel cluttered.

Additionally, there are several miscellaneous issues which can negatively affect the user experience:

1. To navigate from the page showing an overview of all topics linked to one standard (e.g. https://www.opencre.org/node/standard/ASVS/section/V5.3.8) to another standard linked to the same OpenCRE topic, the user has to go through 3 different pages.
According to Shneiderman's golden rules, it is very desirable to allow users shortcuts, so having a way to immediately access different standards would be ideal.

2. Search results are unsorted and unstructured; e.g. https://www.opencre.org/search/asvs shows an unsorted list of ASVS pages.

3. The function of the "related" category is unclear in the current layout. The "related to" blocks tend to be scattered about the page without clear intent.

4. There is no explanation of the topics and standards. The user is expected to know what all the terms mean and what the different standards are for.

5. Long lists of standards reduce readability. OpenCRE shows all standards connected to the topic you are viewing, standards connected to "related" topics, and standards connected to parent topics. This can result in large lists of sometimes up to 10 different entries of the same standard (see for example https://www.opencre.org/cre/153-513). Grouping and hiding these lists would decrease the amount of text shown and greatly increase the readability of the page.

Figure 4.2 shows a page which features most of the above-mentioned issues.

In section 4.4 we discuss ways to solve these issues. We tried multiple alternative layouts, but the results were unsatisfactory: the topic pages contain a lot of information, and the options to visualise them properly without losing the overview they are meant to create are limited. The most obvious solution is to turn the tree into a regular top-down tree, which takes away the unintuitive aspect. Once the tree is visualised intuitively, a textual description of the relations is no longer needed, which allows us to reduce the amount of text on the page. The CRE codes can simply be removed, as they add no value.
4.2 Documentation of OpenCRE

To promote the adoption of OpenCRE, its usage should be clear to new users and, more importantly, standard makers. A clear and accessible front page and documentation are essential to achieve this. Currently, the front page is essentially an "about" page, which is fine, as users will likely not access or bookmark the front page once they use OpenCRE. The explanation video and document presented on the front page are more of an introductory presentation than a dedicated explanation. Making both shorter and more to the point would make things a lot more accessible to newcomers. Especially the explanation of, and solution to, "problem 2" (section 2.2) was very confusing to me. The explanation document provides argumentation for the design choices, but the function of the hierarchy was not elaborated upon. The document was updated to reflect on the design choices; figures 2.4 and 2.5 are the product of this finding.

4.3 Interviews about the user experience

To get a broader understanding of how to improve the user experience of OpenCRE, we interviewed 3 employees of SIG (Software Improvement Group) with various backgrounds about their general impression of OpenCRE and how they think it could be improved. The first interviewee has worked on OpenCRE before and is well versed in the security field, the second is an expert in the security field but only familiar with OpenCRE on a conceptual level, and the final interviewee is neither a security expert nor familiar with OpenCRE. We performed semi-structured interviews which lasted 30-45 minutes and provided several links to OpenCRE and mockups to discuss. In appendix A we provide the general questions of the interview, and in section 4.3.1 we discuss the various ideas the interviewees came up with.

4.3.1 Summary of feedback points

1.
More explanation for the topics: a tooltip (hover-over pop-up window) which shows 1 or 2 lines of description to explain what a topic is about, without the user having to go into it. Some of the terms used might not be clear to users who are not security experts, which makes using OpenCRE hard for them. Descriptions can be provided through a hover-over, an icon/button which displays them, or an explanation line behind the topic name. Figure 4.3 shows how CWE pages utilise tooltips to provide much information unobtrusively.

Figure 4.3: CWE pages utilise tooltips extensively

2. Textual relations are confusing: what do "part of" and "related to" mean? This is the same issue identified in section 4.1, and the fact that the interviewees pointed it out as well underlines the need for change. An obvious way to address this is to adopt a parent/child naming scheme. Since the stakeholders are predominantly people with a technical background, it is safe to assume they will be familiar with tree structures and their naming conventions. As an added upside, it will be immediately clear that the CRE topic hierarchies are trees, making the internal structure clear to the user.

3. Hiding long lists of standards behind a collapsible button/box would be useful.

4. On the architecture page there is a long list of topics. This could use some guidance to help the user figure out where to start. A solution could be to group related topics: a new user would not know where to start, and providing natural groupings would reduce the number of options a user has to choose between.

5. Completely separate all relations: make blocks for child, parent, grandparent, linked and related topics/standards.

Figure 4.4: Mockup showcasing the possibility of separating all blocks

6. Make it clear what the related topics relate to by including them in the block they belong to.
In some of the early mockups used during the interviews, the related topics were moved outside of the main structure. Some interviewees preferred the related blocks to be kept in the blocks of the topics they belong to.

Figure 4.5: Mockup separating the topics of the main hierarchy but including "related" in the connected topic

7. More colour coding, to make clear which section is which. No reading would be required, especially for people familiar with the system. For example, someone would know or learn to recognise the red block as the linked standards, green as the related pages, blue as the leaves, and yellow as the parents. This can also be done with icons, like on CWE pages.

8. Reconsider the order of blocks. Perhaps move the "containing" (child) block up, as users will be more likely to be interested in the underlying topics than in the overarching topics.

9. A sidebar like in "readthedocs" might give more oversight and the possibility to quickly access the top-level topics.

Figure 4.8: Mockup of a CRE page showing an alternate ordering of blocks

Figure 4.9: "readthedocs" has an extensive sidebar used to navigate through topics

4.4 Suggested changes

We identified several issues in sections 4.1 and 4.3; in this section we provide suggestions on how to improve OpenCRE to solve these issues. The goal is to provide sufficient context for the suggested changes to be implemented by the developers.

4.4.1 Cleaning up the topic pages

All unnecessary text and codes should be left out, as more information for users to process makes a page harder to read (see figure 4.10).

Figure 4.10: Unnecessary codes and text highlighted on the topic page of "encrypt data at rest"

Suggested changes are:

1. The unique CRE topic codes "xxx-xxx" have no value to users and can be removed without losing any functionality.
2. "Tag:cryptography" can be removed, as these tag labels also appear as a "related to" category; the line adds no value.
3.
"CRE:" prefix before codes or topic names can be removed as this is considered jargon and does not add value to the user. 4.4.2 Restructuring of the tree hierarchy The current version of OpenCRE uses an inverted tree structure (As shown in figure 4.1). This way of visualising the hierarchy is counter-intuitive and we suggest using a regular tree structure instead. The difference with the original layout is that “session management” and “session lifecycle” have been swapped. (figure 4.11) ![Figure 4.11: Normalised tree and added basic colour distinction](image) Alternatively, we can completely separate the tree nodes and explain the hierarchy by use of common terminologies such as parent, grandparent and child. Adopting the naming scheme of a tree structure will immediately make the hierarchy clear to anyone familiar with tree structures and as the users will likely have a technical background, it can be assumed they will be familiar with this. As a bonus, this change will also eliminate the vagueness of the terms ”is part of” and ”contains” which are currently used to describe child and parent nodes. (figure 4.12) Adopting this way of visualising eliminates the problems with the ordering of the topics as we now use common terminology rather than visual tricks to explain the structure. Using the style as shown in figure 4.12 is preferred as it is completely clear what the relations are, while figure 4.11 still has some ambiguity. ### 4.4.3 Tooltips and explanations The topics used in OpenCRE have a great variety in depth. Some are very specific requirements while others are broad concepts. Take for example ”session management”, ”session lifecycle” and ”terminate all sessions when password is changed”. For the latter, it is clear what is meant by the topic just by reading its name. ”Session management” and ”Session lifecycle” are quite broad and vague. 
A user would have to click on the topic and infer from the standards and topics connected to it what it is actually about. This vagueness can easily be countered by introducing a single line of explanation. We propose adding a small explanation line to each topic which explains what the topic entails. One way this can be done is by using tooltips and icons as they are used on CWE pages (see figure 4.3). These tooltips can be shown when hovering the cursor over the topic name or by adding a small icon after the topic. This way the explanation is shown to those who want it without cluttering the screen with more text. 4.4.4 Topic browsing through a sidebar At the moment there is no easy way to access the list of top-level topics or to freely navigate between topics which are not directly connected within the tree hierarchy. As suggested by an interviewee, we can implement this in a way similar to www.readthedocs.org (figure 4.9). See figure 4.13 for a mockup. Figure 4.13: Proposal for an OpenCRE page with a navigation sidebar As the use of the sidebar is purely situational, we suggest adding the possibility to hide it (or to hide it by default). See figure 4.14. 4.4.5 Miscellaneous changes Aside from the bigger changes mentioned in previous sections, there are several smaller changes which would improve OpenCRE but are not big enough for a separate section: 1. Add a hyperlink icon to standards to enable users to navigate immediately to the standard without having to go through the OpenCRE page for it. Users will likely be more interested in going straight to the standard than in viewing a page which shows which topics are connected to a specific OpenCRE topic. An icon is unobtrusive and will provide this functionality without having to change much in the system. 2. Rename "Is linked to" to "refers to" when referring to the standards connected to a CRE topic.
This more accurately describes the function of the block, as topics refer to certain standards. "Linking" is jargon used by the developers of OpenCRE which is not immediately clear to new users. See figure 4.15. 3. Collapse "related to" blocks by default. Currently, everything is expanded, which results in the topic pages being big and chaotic. Collapsing the related topics cleans up the page considerably. 4. Sorting search results alphabetically is easy to implement and will greatly improve the readability of search results. Another improvement to search results is grouping entries from the same source. For example, a search which yields 13 NIST entries and 5 entries from other sources can be shown as having results from 6 different sources, of which 1 (NIST) has 13 entries. Chapter 5 Improved deep linking OpenCRE is a linking platform: it links specific entries of security standards to common topics. However, there is no standardisation in how security standards are published, which results in standards being published in many different formats such as HTML pages, PDF documents and Markdown pages. In this chapter, we discuss what deep linking is and how it can be done for each of these document types. 5.1 Deep linking Deep linking is a way of linking to a specific page on a website or a specific location on a page. www.opencre.org links to the OpenCRE homepage, while https://www.opencre.org/cre/402-133 deep links to a specific OpenCRE topic. One of the main features of OpenCRE is that it links to specific pages of standards, taking away the need to look through an entire document to find what you are looking for. Not all standards, however, have a format which allows for this conveniently. Some standards have a plain HTML page and others are only available as PDF. We searched for ways to facilitate deep linking to specific points in both HTML and PDF documents.
In sections 5.1.1, 5.1.2 and 5.1.3 we present our findings for these different document formats. 5.1.1 HTML Deep linking to a specific location in an HTML page is fairly easy, as every section in an HTML page is marked by an 'id', and adding #id to the link makes the page load at the location of that id. The id can be found in the source code of the page. For example, to link to "NIST 800 63b section 5.1.9.1" we take the general link to the NIST 800 63b page https://pages.nist.gov/800-63-3/sp800-63b.html and add the id of the specific section we want to go to https://pages.nist.gov/800-63-3/sp800-63b.html# 5.1.2 PDF After much searching, we concluded that it is not possible to link to a specific section in a PDF document. While there exist some features which allow users to jump to a specific bookmark or search term through the URL, these only seem to work reliably on some browsers and are thus not a viable solution. The closest solution we could find was the option to link to a certain page in the PDF. This is done by adding #page=10 to the URL to jump to page 10 upon opening the link. This feature seemed to work on all browsers. For example, https://owasp.org/www-pdf-archive/OWASP_Application_Security_Verification_Standard_4.0-en.pdf#page=10 brings you to page 10 of the ASVS. All in all, this is not very useful for OpenCRE and this remains a problem. 5.1.3 Markdown Markdown is a lightweight markup language which can add formatting to plain text. It is fairly easy to use; however, deep linking is impossible unless the author makes specific links in the Markdown code. Take for example https://github.com/OWASP/ASVS/blob/v4.0.2/4.0/en/0x10-V1-Architecture.md#v12-authentication-architectural-requirements. This link goes to chapter 1.2 of the ASVS, but it only works because there is a pre-existing link made by the authors. Another way Markdown documents are displayed is through GitBook, an even more stylised rendering of a Markdown document; however, the same restrictions apply.
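The HTML fragment and PDF page patterns described above can be captured in a small helper. This is only a sketch: the two functions are hypothetical, the base URLs are the NIST and ASVS documents already cited, and the anchor id "sec5" is a placeholder example, not a real section id.

```python
# Sketch of the two deep-link patterns discussed above.
# Helper functions and the 'sec5' anchor id are hypothetical examples.

def html_deep_link(base_url: str, element_id: str) -> str:
    """HTML pages: appending '#<id>' scrolls the browser to that element."""
    return f"{base_url}#{element_id}"

def pdf_page_link(base_url: str, page: int) -> str:
    """PDF documents: '#page=N' opens the file at page N in most browsers."""
    return f"{base_url}#page={page}"

NIST = "https://pages.nist.gov/800-63-3/sp800-63b.html"
ASVS = ("https://owasp.org/www-pdf-archive/"
        "OWASP_Application_Security_Verification_Standard_4.0-en.pdf")

print(html_deep_link(NIST, "sec5"))  # 'sec5' is a placeholder id
print(pdf_page_link(ASVS, 10))       # opens the ASVS at page 10
```

Note that only the HTML variant reaches an arbitrary point in the document; the PDF variant can target a page at best, which matches the limitation described above.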
There is no way to refer to a more specific spot if the writer has not linked to it, as there is no 'id' or similar point of reference to link to as there is with HTML. Chapter 6 Improved linking to CWE The Common Weakness Enumeration (CWE) is a large resource of great importance to OpenCRE, as the requirements in OpenCRE need to protect against the weaknesses documented in CWE. Improving the linking to CWE is hard, however, as there are almost 1000 CWE entries of which only a part is relevant within the scope of CRE. To get a grasp of how to improve the linking between OpenCRE and CWE, we analysed the OpenCRE coverage in section 6.1, researched the coverage and structure of CWE in section 6.2, and looked into the discrepancies found in the mapping between OpenCRE, CWE and OWASP ZAP in section 6.3. Finally, we present our conclusion on this subject in section 6.4. 6.1 Charting CRE To get a better understanding of the content-wise coverage of OpenCRE we performed an analysis of the top-level topics. What we found is that the CRE covers 3 main categories: 1. Web application security. Ranging from detecting whether communications are automated, to authentication, to cryptography. This category is well established, covering a wide range of logically connected topics. Figure 6.1 shows a visualisation of the coverage of OpenCRE regarding web application security. 2. Organisational. Requirements revolving around, for example, hiring personnel and risk assessments. This category has 4 top-level CREs but can be expanded to include more topics such as physical security and others mentioned in ISO 27k and NIST. 3. Deployment. One top-level CRE covers deployment and operations; it covers aspects of the SDLC. Most of the top-level topics cover web application security, and the deployment and organisational topics are entirely based on NIST standards. 6.2 Charting CWE Not all CWEs are relevant. However, figuring out which are relevant is not a straightforward task.
Hardware CWEs are obviously out of the scope of the CRE, and variant CWEs might be too specific to include and would not contribute much. For example, CWE 41 "Improper Resolution of Path Equivalence" could be relevant, but its variant children such as CWE 42 "Path Equivalence: 'filename.' (Trailing Dot)" would not add much, especially if the parent is already included. MITRE offers several CWE 'views' which might be the key to this problem. Two views stand out, as they categorise and reduce the number of CWEs to a more usable selection. The simplified view (CWE 1003) is a selection of 127 weaknesses with some categorisation, which had some potential; however, the focus of this view makes it too limited for OpenCRE. More notably, the software development view (CWE 699) is likely the most relevant, as this is a set which has already been filtered and sorted into categories relevant to software development. This set has 419 entries spread over 40 categories, making it a lot more manageable than the complete set. 6.3 ZAP-CWE-CRE discrepancies OWASP ZAP is a new resource which was integrated into OpenCRE during the span of this thesis. ZAP is an automated web application security scanner which checks applications against predefined rules. The ZAP rules are now linked to a relevant OpenCRE topic but also have a link to a CWE which covers the weakness detected. The linking between OWASP ZAP, OpenCRE and CWE is not perfect, however: there is a set of CWEs which are linked to a ZAP rule but are not in OpenCRE. Figure 6.2 illustrates the relations between the 3 resources. ![Diagram](image.png) Figure 6.2: Relations between CWE, CRE and ZAP We were provided with a list of CWEs which were connected to OWASP ZAP rules but not in the CRE. This list was compiled before the start of this thesis as part of an effort to verify the existing mappings of CWE and CRE. The data consists of CWEs that are not in OpenCRE but are connected to ZAP rules which are in OpenCRE.
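A check like the one that produced this list boils down to a set difference: CWE ids referenced by ZAP rules minus CWE ids that OpenCRE already maps. The sketch below uses hypothetical stand-in data (a handful of example entries), not the real OpenCRE and ZAP mappings, and the output format mirrors the raw data in Appendix A.2.

```python
# Sketch of the discrepancy check described above: find CWEs that ZAP
# rules reference but that OpenCRE does not map to any topic.
# Both mappings below are hypothetical example data, not the real sets.

# CWE id -> names of ZAP rules that reference it
zap_rule_cwes = {
    113: ["CRLF Injection"],
    472: ["Parameter Tampering"],
    89: ["SQL Injection"],
}

# CWE ids that OpenCRE already maps to a topic (example subset)
cre_known_cwes = {89}

def missing_cwes(zap_map, known):
    """Return the sorted CWE ids referenced by ZAP but unknown to OpenCRE."""
    return sorted(cwe for cwe in zap_map if cwe not in known)

for cwe in missing_cwes(zap_rule_cwes, cre_known_cwes):
    for rule in zap_rule_cwes[cwe]:
        print(f'opencre.org does not know of CWE {cwe}, '
              f'it is linked to by zap alert: ZAP Rule: "{rule}"')
```

With the example data this reports CWE 113 and CWE 472 as missing, while CWE 89 is skipped because it is already known.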
We manually cross-referenced all the CWEs and ZAP rules to make sense of what is going on, and found that these ZAP rule-CWE couples fall into 7 groups based on the similarity of their subject, which can be further generalised into 2 categories: 1. Specific ZAP rule with an equally specific CWE. These CWEs and ZAP alerts can be linked to a CRE topic which covers the broader category. The reason these were not already in the CRE is likely that they can be considered variants of a more general weakness which is already in OpenCRE. Adding them will not add any value to OpenCRE as they are too specific to make for requirements. 2. Specific ZAP rule with a very broad CWE. This category consists of ZAP rules which do not have a properly matching CWE entry. The problem here lies mostly with MITRE, as these are weaknesses which are simply not covered in the CWE. The broad CWEs are not useful to include in the CRE as they provide very little information; possibly a child CWE of the broad entry can be added to provide more coverage, but this should be assessed case by case. The ZAP rules themselves can be added to an appropriate CRE. For instance, the ZAP rule "Source Code Disclosure - Git" is about being able to access source code without proper authorisation and can be added to a relevant CRE ("data access control"). The raw data can be found in Appendix A.2. 6.4 How can the CWE-CRE linking be improved? Linking CWE and CRE is hard, as it is like comparing apples and oranges. While there is much overlap, fitting them together is a complex puzzle which ultimately boils down to manually going through CWEs to figure out whether there are requirements connected to them in the CRE or whether new requirements should be added. The sheer number of CWEs makes it impossible to add everything, and most of the 'views' provided by MITRE are not very useful. The best approach for improving CWE linking in the CRE is through CWE view 699.
This view provides 40 software development categories which can serve as a basis for finding gaps in the CRE and creating new mappings from CRE topics to CWEs. When comparing CRE and CWE, the most notable gap is the lack of best practices, code quality and documentation entries in OpenCRE. CWE view 699 offers several categories which cover these topics and can thus be used as a basis for adding to or expanding OpenCRE. These categories should be carefully assessed by an expert to make suitable requirements for these uncovered subjects. [1] https://cwe.mitre.org/data/definitions/699.html Chapter 7 Future work 1. In chapter 4 we identified several changes which can greatly improve the usability of OpenCRE. Their implementation, however, falls outside the scope of this project. 2. In section 6.4 we concluded that there is room for improvement in the CRE-CWE linking in the areas of code quality, documentation and best practices. However, we did not have the time to make concrete recommendations for linking these topics or categories, nor did we have the expertise to create new topics. 3. In section 6.1 we analysed the content-wise coverage of OpenCRE and found that the majority of the topics cover application security. The deployment and organisational security categories were limited in comparison and only linked to NIST. There is much room for expansion in these areas. Chapter 8 Conclusions In this thesis, we worked to identify ways to improve OpenCRE. To do this we performed assessments of its current functionality. The first step was a general assessment from a newcomer's point of view. The second step was a series of interviews with multiple stakeholders of the project, including developers and security experts. We conclude that OpenCRE, as it looks during its current beta phase, can be confusing even to someone with a broad understanding of the security field. The main issues regarding user experience were: 1.
The reverse tree structure shown in figure 4.1 is counterintuitive from a user's perspective, causing unnecessary pause on an overview which is designed to provide quick and easy access to numerous sources. 2. There is a large amount of information shown. A lot of text was used to explain relations, and codes were used which are mostly meaningless to users. 3. Not all relations on the topic pages are clear. These should be intuitive and recognisable to ensure users can easily navigate to where they want to go. 4. Lack of explanation of topics and standards. Topic names can be cryptic, for example "Encode user input before logging". Having a line of explanation or a tooltip which explains this would enhance the user experience. 5. Poor page usage. Everything is ordered vertically, making OpenCRE pages longer than needed while leaving the right side empty. We made several suggestions to improve these issues in section 4.4, most notably: 1. Remove unnecessary codes and other jargon to reduce short-term memory load. 2. Restructure or break up the tree hierarchy to make the layout more intuitive. 3. Add explanations of topics in the form of hover-over tooltips to enable more informative feedback. 4. Add a topic browsing sidebar to enable easy reversal of action and shortcuts. 5. Change the way topics refer to each other to be more natural and informative. In chapter 5 we discussed ways to deep link HTML, PDF and Markdown documents in OpenCRE. Finally, in chapter 6 we explored the gap between CWE and CRE and made recommendations on how to improve the linking of CWE in OpenCRE. Bibliography Appendix A Appendix This is the rough layout of the questions posed during the semi-structured interviews. A.1 Interview 1. Work history (a) Education (b) Past functions (c) Current function 2. Standards and security (a) Experience with security (b) Experience with security standards 3. OpenCRE (a) What does the interviewee know about OpenCRE?
(b) Case with CRE: a wiki page that links to CRE (c) What does the interviewee miss content-wise in OpenCRE, for the various use cases that might be relevant to the interviewee? (d) What does the interviewee miss UX-wise in OpenCRE? 4. Design changes (a) Show mockups and ask which would be preferable. A.2 ZAP-CWE-CRE analysis raw data Output of a script which analysed which CWEs are not in CRE but are referenced in ZAP rules. The colours indicate similarities in subject.
opencre.org does not know of CWE 113, it is linked to by zap alert: ZAP Rule: "CRLF Injection"
opencre.org does not know of CWE 472, it is linked to by zap alert: ZAP Rule: "Parameter Tampering"
opencre.org does not know of CWE 776, it is linked to by zap alert: ZAP Rule: "Exponential Entity Expansion (Billion Laughs Attack)"
opencre.org does not know of CWE 91, it is linked to by zap alert: ZAP Rule: "XSLT Injection"
opencre.org does not know of CWE 917, it is linked to by zap alert: ZAP Rule: "Expression Language Injection"
opencre.org does not know of CWE 943, it is linked to by zap alert: ZAP Rule: "NoSQL Injection - MongoDB"
opencre.org does not know of CWE 97, it is linked to by zap alert: ZAP Rule: "Server Side Include"
opencre.org does not know of CWE 119, it is linked to by zap alert: ZAP Rule: "Heartbleed OpenSSL"
opencre.org does not know of CWE 1275, it is linked to by zap alert: ZAP Rule: "Cookie without SameSite Attribute"
opencre.org does not know of CWE 215, it is linked to by zap alert: ZAP Rule: "Cross-Domain Misc Security"
opencre.org does not know of CWE 264, it is linked to by zap alert: ZAP Rule: "Cross-Domain Misc Security"
opencre.org does not know of CWE 530, it is linked to by zap alert: ZAP Rule: "Backup File Disclosure"
opencre.org does not know of CWE 538, it is linked to by zap alert: ZAP Rule: "Hidden File Finder"
opencre.org does not know of CWE 541, it is linked to by zap alert: ZAP Rule: "Source Code Disclosure"
opencre.org does not know of CWE 542, it is linked to by
zap alert: ZAP Rule: "Source Code Disclosure"
opencre.org does not know of CWE 543, it is linked to by zap alert: ZAP Rule: "Source Code Disclosure"
opencre.org does not know of CWE 565, it is linked to by zap alert: ZAP Rule: "Loosely Scoped Cookie"
opencre.org does not know of CWE 642, it is linked to by zap alert: ZAP Rule: "Emails Found in the Viewstate"
opencre.org does not know of CWE 642, it is linked to by zap alert: ZAP Rule: "Insecure JSF ViewState"
opencre.org does not know of CWE 642, it is linked to by zap alert: ZAP Rule: "Old Asp.Net Version"
opencre.org does not know of CWE 642, it is linked to by zap alert: ZAP Rule: "Potential IP Addresses Found"
opencre.org does not know of CWE 642, it is linked to by zap alert: ZAP Rule: "Split Viewstate in Use"
opencre.org does not know of CWE 642, it is linked to by zap alert: ZAP Rule: "Viewstate without MAC Signature"
opencre.org does not know of CWE 693, it is linked to by zap alert: ZAP Rule: "CSP"
opencre.org does not know of CWE 693, it is linked to by zap alert: ZAP Rule: "Insufficient Site Isolation"
opencre.org does not know of CWE 693, it is linked to by zap alert: ZAP Rule: "Insufficient Site Isolation"
opencre.org does not know of CWE 693, it is linked to by zap alert: ZAP Rule: "Insufficient Site Isolation"
opencre.org does not know of CWE 933, it is linked to by zap alert: ZAP Rule: "X-Asp-Http-Version Re header"
{"Source-Url": "https://www.cs.ru.nl/bachelors-theses/2022/Thomas_Klein_Breteler___4068246___Improving_OpenCRE.pdf", "len_cl100k_base": 11341, "olmocr-version": "0.1.53", "pdf-total-pages": 46, "total-fallback-pages": 0, "total-input-tokens": 71345, "total-output-tokens": 13945, "length": "2e13", "weborganizer": {"__label__adult": 0.0002777576446533203, "__label__art_design": 0.0003800392150878906, "__label__crime_law": 0.0003764629364013672, "__label__education_jobs": 0.0023250579833984375, "__label__entertainment": 5.120038986206055e-05, "__label__fashion_beauty": 0.00013244152069091797, "__label__finance_business": 0.00031065940856933594, "__label__food_dining": 0.0001962184906005859, "__label__games": 0.0004069805145263672, "__label__hardware": 0.00047206878662109375, "__label__health": 0.00018227100372314453, "__label__history": 0.0002008676528930664, "__label__home_hobbies": 7.402896881103516e-05, "__label__industrial": 0.00020599365234375, "__label__literature": 0.00022029876708984375, "__label__politics": 0.00021779537200927737, "__label__religion": 0.0002378225326538086, "__label__science_tech": 0.0064697265625, "__label__social_life": 0.00011432170867919922, "__label__software": 0.011138916015625, "__label__software_dev": 0.9755859375, "__label__sports_fitness": 0.0001646280288696289, "__label__transportation": 0.00023925304412841797, "__label__travel": 0.00013172626495361328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 51909, 0.03805]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 51909, 0.49822]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 51909, 0.93118]], "google_gemma-3-12b-it_contains_pii": [[0, 312, false], [312, 2109, null], [2109, 3848, null], [3848, 3939, null], [3939, 5477, null], [5477, 6669, null], [6669, 7133, null], [7133, 7730, null], [7730, 8603, null], [8603, 11055, null], [11055, 12368, null], 
[12368, 13094, null], [13094, 13661, null], [13661, 15451, null], [15451, 16566, null], [16566, 17021, null], [17021, 18459, null], [18459, 20084, null], [20084, 21474, null], [21474, 22695, null], [22695, 23361, null], [23361, 25795, null], [25795, 26541, null], [26541, 27496, null], [27496, 27965, null], [27965, 28152, null], [28152, 28429, null], [28429, 29581, null], [29581, 30399, null], [30399, 31817, null], [31817, 32373, null], [32373, 33306, null], [33306, 33899, null], [33899, 35640, null], [35640, 37167, null], [37167, 38379, null], [38379, 39714, null], [39714, 41594, null], [41594, 43480, null], [43480, 44274, null], [44274, 45806, null], [45806, 46461, null], [46461, 47948, null], [47948, 48120, null], [48120, 48803, null], [48803, 51909, null]], "google_gemma-3-12b-it_is_public_document": [[0, 312, true], [312, 2109, null], [2109, 3848, null], [3848, 3939, null], [3939, 5477, null], [5477, 6669, null], [6669, 7133, null], [7133, 7730, null], [7730, 8603, null], [8603, 11055, null], [11055, 12368, null], [12368, 13094, null], [13094, 13661, null], [13661, 15451, null], [15451, 16566, null], [16566, 17021, null], [17021, 18459, null], [18459, 20084, null], [20084, 21474, null], [21474, 22695, null], [22695, 23361, null], [23361, 25795, null], [25795, 26541, null], [26541, 27496, null], [27496, 27965, null], [27965, 28152, null], [28152, 28429, null], [28429, 29581, null], [29581, 30399, null], [30399, 31817, null], [31817, 32373, null], [32373, 33306, null], [33306, 33899, null], [33899, 35640, null], [35640, 37167, null], [37167, 38379, null], [38379, 39714, null], [39714, 41594, null], [41594, 43480, null], [43480, 44274, null], [44274, 45806, null], [45806, 46461, null], [46461, 47948, null], [47948, 48120, null], [48120, 48803, null], [48803, 51909, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 51909, null]], 
"google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 51909, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 51909, null]], "pdf_page_numbers": [[0, 312, 1], [312, 2109, 2], [2109, 3848, 3], [3848, 3939, 4], [3939, 5477, 5], [5477, 6669, 6], [6669, 7133, 7], [7133, 7730, 8], [7730, 8603, 9], [8603, 11055, 10], [11055, 12368, 11], [12368, 13094, 12], [13094, 13661, 13], [13661, 15451, 14], [15451, 16566, 15], [16566, 17021, 16], [17021, 18459, 17], [18459, 20084, 18], [20084, 21474, 19], [21474, 22695, 20], [22695, 23361, 21], [23361, 25795, 22], [25795, 26541, 23], [26541, 27496, 24], [27496, 27965, 25], [27965, 28152, 26], [28152, 28429, 27], [28429, 29581, 28], [29581, 30399, 29], [30399, 31817, 30], [31817, 32373, 31], [32373, 33306, 32], [33306, 33899, 33], [33899, 35640, 34], [35640, 37167, 35], [37167, 38379, 36], [38379, 39714, 37], [39714, 41594, 38], [41594, 43480, 39], [43480, 44274, 40], [44274, 45806, 41], [45806, 46461, 42], [46461, 47948, 43], [47948, 48120, 44], [48120, 48803, 45], [48803, 51909, 46]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 51909, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
5269e8c96413cd67d3766fcc4ad272f1091c2950
Abstract The Dagstuhl Seminar “Approaches and Applications of Inductive Programming” (AAIP) has taken place for the sixth time. The Dagstuhl Seminar series brings together researchers concerned with learning programs from input/output examples from different areas, mostly from machine learning and other branches of artificial intelligence research, cognitive scientists interested in human learning in complex domains, and researchers with a background in formal methods and programming languages. Main topics addressed in the AAIP 2023 seminar have been neurosymbolic approaches to IP bringing together learning and reasoning, IP as a post-hoc approach to explaining decision-making of deep learning blackbox models, and exploring the potential of deep learning approaches, especially large language models such as OpenAI Codex for IP. Topics discussed in working groups were Large Language Models and inductive programming in cognitive architectures, avoiding too much search in inductive programming, finding suitable benchmark problems, and evaluation criteria for interpretability and explainability of inductive programming. Keywords and phrases explainable ai, human-like machine learning, inductive logic programming, interpretable machine learning, neuro-symbolic ai 1 Executive Summary Ute Schmid (Universität Bamberg, DE) Luc De Raedt (KU Leuven, BE) Inductive programming (IP) is a special perspective on program synthesis, addressing learning programs from incomplete specifications such as input/output examples. The seminar “Approaches and Applications of Inductive Programming” (AAIP) took place in Dagstuhl for the sixth time. This Dagstuhl Seminar brings together researchers from different areas of artificial intelligence research, machine learning, formal methods, programming languages, cognitive science, and human-computer-interaction interested in methods and applications of IP. 
Focus topics of AAIP ’23 have been neurosymbolic approaches to IP bringing together learning and reasoning, IP as a post-hoc approach to explaining decision-making of deep learning blackbox models, and exploring the potential of deep learning approaches, especially large language models such as OpenAI Codex for IP. The focus topics have been introduced and discussed in a series of talks addressing neuro-symbolic IP, IP for learning in planning, explainable AI and IP, and IP and generative AI. Furthermore, a series of talks were dedicated to the relation of cognitive science to IP: Human-like few-shot learning via Bayesian reasoning over natural language, the child as hacker, using program synthesis to model strategy diversity in human visual reasoning, a neurodiversity-inspired solver for the Abstraction and Reasoning Corpus (ARC) using visual imagery and program synthesis, and using natural language for self-programming in cognitive architectures. The relation between IP and explainability has been highlighted with talks about explainable models via compression of relational ensembles, and effects of explaining machine-learned logic programs for human comprehension and discovery. Relations between IP and knowledge based methods have been addressed in a talk about learning disjointness axioms for knowledge graph refinement and for making knowledge graph embedding methods more robust. Methods of IP as an approach to learning interpretable rules have been presented with a focus on inductive logic programming (ILP), deep-rule learning, relational program synthesis with numerical reasoning, improving rule classifiers learned from quantitative data by recovering information lost by discretisation, meta-interpretive learning for generalised planning, probabilistic inductive logic programming, abstraction for answer set programs, anti-unification and generalization, programmatic reinforcement learning, and making program synthesis fast on a GPU. 
These talks have been complemented by several system demos presenting the ILP systems Popper and Louise, an RDF rules learner, and EmFORE, a system which learns rules to sort e-mails into folders. We identified four relevant research problems for current and future research in IP which were addressed in in-depth discussions in working groups and afterwards discussed in plenary sessions: (1) Large Language Models and Inductive Programming in Cognitive Architectures: one main outcome has been that combining learning and reasoning by integrating LLMs and reasoners in a cognitive architecture could be an enabler for validating programs that get executed by the overall architecture and for possibly getting nearer to human performance. (2) Avoiding too much search in Inductive Programming: It was noted that for IP in general we do need to learn structure as well as probabilities. Classic IP approaches focus on structure learning and – in contrast to neural network architectures – can learn recursion explicitly. The main result has been that suitable problem domains should be identified for systematic evaluation, such as string transformations which combine syntactic (e.g. return the first letter) and semantic (e.g. give the capital of a country) transformations. (3) Finding Suitable Benchmark Problems for Inductive Programming: Here, the discussion from the second topic has been extended and systematised with the formulation of several relevant criteria for benchmark problems to evaluate IP approaches, among them problem domains which are not solvable by LLMs but solvable efficiently by humans. (4) Evaluation Criteria for Interpretability and Explainability of Inductive Programming: The main insight has been that the degree of interpretability and the quality of explanations is strongly context-dependent, being influenced by the recipient (who), the content (what), the information need and reason for an explanation (why), and the form of the explanation (how).
Different candidates for metrics were identified, such as complexity measures, semantic coherence, and reliability of generated code. In a final discussion round, several outcomes have been summarized and action points have been discussed. A crucial problem which might impact scientific progress as well as visibility could be that there is no core general approach to IP (such as gradient descent for neural networks). Relevant use cases might not have a focus on learning recursion/loops but on relations (e.g. in medicine and biology). The focus on learning programs (including recursion) might profit from using Python as the target language instead of more specific languages such as Prolog. Furthermore, current IP systems are mostly not easy to find and to use. Providing a toolbox which can be easily used (such as Weka for standard ML) might be helpful. There was a general agreement among the participants that the format of Dagstuhl Seminars is especially fruitful for bringing together the different perspectives on IP from machine learning, cognitive science, and program language research. ## Table of Contents ### Executive Summary *Ute Schmid and Luc De Raedt* ................................................................. 182 ### Overview of Talks - Effects of explaining machine-learned logic programs for human comprehension and discovery *Lun Ai* ........................................................................................................... 187 - Making program synthesis fast on a GPU *Martin Berger* ........................................................................................... 188 - Anti-unification and Generalization: What’s next? *David Cerna* ............................................................................................... 
189 - On the Need of Learning Disjointness Axioms for Knowledge Graph Refinement and for Making Knowledge Graph Embedding Methods more Robust *Claudia d’Amato* ......................................................................................... 189 - How to make logics neurosymbolic *Luc De Raedt* ............................................................................................. 190 - What should we do next in ILP? *Sebastijan Dumančić* .................................................................................. 191 - Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language *Kevin Ellis* .................................................................................................... 191 - Towards Programmatic Reinforcement Learning *Nathanaël Fijalkow* ...................................................................................... 192 - Inductive Programming for Explainable Artificial Intelligence (IP for XAI) *Bettina Finzel* .............................................................................................. 192 - On Deep Rule Learning *Johannes Fürnkranz* ..................................................................................... 193 - Three Learning Problems in Planning *Hector Geffner* ........................................................................................... 194 - A tutorial on Popper *Céline Hocquette* ....................................................................................... 194 - Relational program synthesis with numerical reasoning *Céline Hocquette* ....................................................................................... 195 - On the role of natural language for self-programming in cognitive architectures *Frank Jäkel* ................................................................................................... 
196 - QCBA: improving rule classifiers learned from quantitative data by recovering information lost by discretisation *Tomáš Kliegr* ............................................................................... 196 - RDFRules: A Swiss knife for relational association rule learning, classification and knowledge graph completion *Tomáš Kliegr* ............................................................................... 197 - The Child as Hacker *Josh Rule* ......................................................... 198 - Abstraction for Answer Set Programs *Zeynep G. Sarıbatur* ........................................... 199 - Explanatory Inductive Programming (XAI for IP) *Ute Schmid* ...................................................... 200 - Explainable models via compression of tree ensembles *Sriraam Natarajan* ............................................ 201 - Inductive Programming meets Large Language Models *Gust Verbruggen* ................................................. 202 - Inductive Programming meets Real User Problems *Gust Verbruggen* ............................................... 202 - Probabilistic Logic Programming: Quo Vadis? *Felix Weitkämper* .............................................. 203

### Working groups

- Large Language Models and Inductive Programming in Cognitive Architectures *Bettina Finzel and Frank Jäkel* .............................. 204
- Avoiding too much search in Inductive Programming *Ute Schmid, David Cerna, and Hector Geffner* ........... 204
- Evaluation Criteria for Interpretability and Explainability of Inductive Programming *Ute Schmid, Lun Ai, Claudia d’Amato, and Johannes Fürnkranz* .............. 205
- Finding Suitable Benchmark Problems for Inductive Programming *Ute Schmid, Martin Berger, Sebastijan Dumancic, Nathanaël Fijalkow, and Gust Verbruggen* .......... 207

### Panel discussions

- Inductive Programming – How to Go On?
*Ute Schmid, Claudia d’Amato, Hector Geffner, Sriraam Natarajan, and Josh Rule* .......... 209

Participants ...................................................... 211

## 3 Overview of Talks

### 3.1 Effects of explaining machine-learned logic programs for human comprehension and discovery

*Lun Ai (Imperial College London, GB)*

The talk focused on a common assumption in the Logic Programming community: that logic programs are human-comprehensible. This assumption has resulted in very few empirical assessments of the effects of explaining machine-learned logic programs. Empirical results by the authors showed that explaining logic programs does not always lead to improved human performance. In addition, the authors stressed the need for objective and operational measurements of explainability. Their results provided novel insights on the explanatory effects of curriculum order and the presence of machine-learned explanations for sequential problem-solving. The topic of comprehensibility of machine-learned theories has recently drawn increasing attention. Inductive logic programming uses logic programming to derive logic theories from small data based on abduction and induction techniques. Learned theories are represented in the form of rules as declarative descriptions of the obtained knowledge. In earlier work, the authors provided the first evidence of a measurable increase in human comprehension based on machine-learned logic rules for simple classification tasks. In a later study, it was found that the presentation of machine-learned explanations to humans can produce both beneficial and harmful effects in the context of game learning. The talk concentrated on the authors’ most recent investigation of the effects of the ordering of concept presentations and logic program explanations. The authors proposed a framework for the effects of sequential teaching based on an existing definition of comprehensibility. This empirical study involved curricula that teach novices the merge sort algorithm.
They provided performance-based and trace-based evidence for support. Results show that sequential teaching of concepts with increasing complexity (a) has a beneficial effect on human comprehension, (b) leads to human re-discovery of divide-and-conquer problem-solving strategies, and (c) allows adaptations of the human problem-solving strategy with better performance when machine-learned explanations are also presented. Several open questions were discussed during and after the talk. For instance, the audience suggested an investigation on “learning how to learn” and comparisons between the human traces and the machine learner (ILP) trace. In the context of the increasing popularity of logic programs, some challenges in higher-education curricula were discussed, showing the significance of how to best design Logic Programming teaching interactions. Importantly, this talk highlighted the limitations of performance-based evaluations. This led to an extended discussion on computable and objective assessments for various perspectives of explainability.

### 3.2 Making program synthesis fast on a GPU

*Martin Berger (University of Sussex – Brighton, GB)*

License: Creative Commons BY 4.0 International license © Martin Berger
Joint work of: Mojtaba Valizadeh, Martin Berger
URL: https://doi.org/10.1145/3591274

Inductive programming is stuck! GPUs are the work-horses of computing. Applications that fit the GPU style of programming typically run orders of magnitude faster on GPUs than on CPUs. This gives opportunities for scaling not achievable with CPUs. The recent success of deep learning amply demonstrates this. Unfortunately, large classes of applications are not known to benefit from GPU acceleration. That includes most tools in program synthesis, inductive programming, theorem proving, ... (from now on: automated reasoning), such as SAT and SMT solvers. How can we change this?
Simplifying a bit, a GPU can only accelerate applications if they are “GPU-friendly”, meaning they

- are highly parallel,
- have little to no data-dependent branching, and
- have predictable data movement with high temporal and spatial data locality.

Algorithms in automated reasoning, as implemented today, mostly lack those properties. Many are extremely branching-heavy, for example because they branch on syntactic structure. Some are seemingly sequential (e.g., unit propagation, a core step in modern SAT solvers for simplifying formulae). This might be because an algorithmic problem is intrinsically sequential, or because a way of making an algorithmic problem GPU-friendly has not yet been found. Research question: Can we identify workloads arising in industrial automated reasoning practice, and scale them up on GPUs by developing suitable, GPU-friendly algorithms? The GPU-based algorithms should give at least 100x speedup (for comparable problem instances), and be able to handle at least 1000x bigger problem instances, both in comparison with state-of-the-art open (= non-proprietary) software for the same problem domain. Preliminary answer, based on [1]: all program synthesis that uses the generate-and-test approach can see orders of magnitude speedup on GPUs. Recommendation to the ILP community: stop what you are doing and implement your ideas on a GPU.

References

### 3.3 Anti-unification and Generalization: What’s next?

*David Cerna (The Czech Academy of Sciences – Prague, CZ)*

Anti-unification (AU) is a fundamental operation for the computation of symbolic generalizations useful for inductive inferencing [1]. It is the dual operation to unification, an operation at the foundation of automated theorem proving.
In contrast to unification, where one is interested in constructing most general unifiers (mgus), anti-unification is concerned with the construction of least general generalizations (lggs); that is, expressions capturing the commonalities shared between the members of a set of symbolic expressions. The operation was introduced by Plotkin and Reynolds and found many applications within the area of inductive synthesis and, in particular, early inductive logic programming (ILP) systems. However, since their seminal work, the number of applications has grown tremendously, with uses in program analysis, program repair, automated reasoning, and beyond. With the growing number of applications, several investigations have developed anti-unification methods over various symbolic objects, such as the simply-typed lambda calculus, term graphs, and hedge expressions, to name a few. In particular, there has been significant progress in understanding equational anti-unification and the cardinality of the set of solutions (the set of lggs). In many cases, the solution sets are either infinitely large or do not exist (every generalization allows a more specific generalization). We ask: is least general generalization the right characterization of a solution to an anti-unification problem? In particular, is there a characterization of a solution more amenable to modern approaches to inductive synthesis? Secondly, what does the inductive synthesis community need from symbolic generalization techniques which is currently missing?

References

### 3.4 On the Need of Learning Disjointness Axioms for Knowledge Graph Refinement and for Making Knowledge Graph Embedding Methods more Robust

*Claudia d’Amato (University of Bari, IT)*

Knowledge Graphs (KGs) are multi-relational graphs designed to organize and share real-world knowledge, where nodes represent entities of interest and edges represent different types of relationships between such entities [1].
Despite their wide usage, it is well known that KGs suffer from incompleteness and noise. For tackling these problems, solutions to the link prediction task, which amounts to predicting an unknown component of a triple, have been investigated. Mostly, Knowledge Graph Embedding (KGE) methods have been devised, since they have been shown to scale even to very large KGs. KGE methods convert the data graph into an optimal low-dimensional space where structural graph information is preserved as much as possible. Embeddings are learned based on the constraint that the score of a valid (positive) triple has to be lower than the score of an invalid (negative) triple. As KGs mainly encode positive triples, negative triples are obtained by randomly corrupting true/observed triples [2], thus possibly injecting false negatives during the learning process. In this talk we present a solution for an informed generation of negative examples that, by exploiting the semantics of the KG and reasoning capabilities, is able to limit false negatives. A key element is represented by disjointness axioms, which are essential for making explicit the negative knowledge about a domain. Yet, disjointness axioms are often overlooked during the modeling process [3]. For this purpose, a symbolic method for discovering disjointness axioms from the data distribution is illustrated. Starting from the assumption that two or more concepts may be mutually disjoint when the sets of their (known) instances do not overlap, the problem is cast as a conceptual clustering problem, where the goal is both to find the best possible partitioning of the individuals in (a subset of) the KG and to induce intensional definitions of the corresponding classes expressed in the standard representation languages. The talk concludes with an analysis of some open challenges related to the presented solutions.
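The random-corruption heuristic criticized above can be sketched as follows. This is a generic illustration of standard negative sampling, not the authors' method; the toy KG and the `corrupt` helper are invented for this example. It shows how a corrupted triple may be false only with respect to the (incomplete) KG, i.e., a potential false negative.

```python
import random

# A tiny knowledge graph as a set of (head, relation, tail) triples.
kg = {
    ("alice", "livesIn", "paris"),
    ("bob", "livesIn", "rome"),
    ("paris", "locatedIn", "france"),
}
entities = sorted({e for (h, _, t) in kg for e in (h, t)})

def corrupt(triple, rng):
    """Standard negative sampling: replace the head or the tail with a
    random entity, keeping only corruptions absent from the KG."""
    h, r, t = triple
    while True:
        e = rng.choice(entities)
        cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if cand not in kg and cand != triple:
            return cand

rng = random.Random(0)
negatives = [corrupt(t, rng) for t in kg for _ in range(2)]
# Caveat: ("bob", "livesIn", "paris") could be generated as a "negative"
# even if it is true in the real world but merely missing from the KG.
# A disjointness axiom (e.g., City and Person are disjoint classes) would
# instead let a reasoner keep type-violating corruptions such as
# ("paris", "livesIn", "rome"), which are guaranteed false.
```

The informed generation described in the talk replaces this blind check against the KG with semantic filtering based on such axioms.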
References

### 3.5 How to make logics neurosymbolic

*Luc De Raedt (KU Leuven, BE)*

Neurosymbolic AI (NeSy) is regarded as the third wave in AI. It aims at combining knowledge representation and reasoning with neural networks. Numerous approaches to NeSy are being developed and there exists an ‘alphabet soup’ of different systems, whose relationships are often unclear. I will discuss the state of the art in NeSy and argue that there are many similarities with statistical relational AI (StarAI). Taking inspiration from StarAI, and exploiting these similarities, I will argue that Neurosymbolic AI = Logic + Probability + Neural Networks. I will also provide a recipe for developing NeSy approaches: start from a logic, add a probabilistic interpretation, and then turn neural networks into “neural predicates”. Probability is interpreted broadly here, and is necessary to provide a quantitative and differentiable component to the logic. At the semantic and the computational level, one can then combine logical circuits (a kind of proof structure) labeled with probabilities, and neural networks, in computation graphs. I will illustrate the recipe with NeSy systems such as DeepProbLog, a deep probabilistic extension of Prolog, and DeepStochLog, a neural network extension of stochastic definite clause grammars (or stochastic logic programs).

### 3.6 What should we do next in ILP?

*Sebastijan Dumančić (TU Delft, NL)*

License: Creative Commons BY 4.0 International license © Sebastijan Dumančić

This talk consists of two parts. In the first part, I provide a brief introduction to Inductive Logic Programming: what it is, why it is interesting, and what interesting developments have happened recently. In the second part, I explore what I think we should do next in ILP and program synthesis to further advance the field, all centered around the idea of avoiding search.
### 3.7 Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language

*Kevin Ellis (Cornell University – Ithaca, US)*

License: Creative Commons BY 4.0 International license © Kevin Ellis
Main reference URL: https://doi.org/10.48550/ARXIV.2306.02797

A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighted by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.

### 3.8 Towards Programmatic Reinforcement Learning

*Nathanaël Fijalkow (CNRS – Talence, FR)*

License: Creative Commons BY 4.0 International license © Nathanaël Fijalkow

This short talk was a pitch for a new problem, called Programmatic Reinforcement Learning: assuming that the environment is given as a program, the goal is to construct an optimal policy in the form of a program. Some motivations, basic examples, and preliminary experimental results were presented and discussed.

### 3.9 Inductive Programming for Explainable Artificial Intelligence (IP for XAI)

*Bettina Finzel (Universität Bamberg, DE)*

License: Creative Commons BY 4.0 International license © Bettina Finzel

Methods of explainable artificial intelligence (XAI) and of inductive programming (IP) can profit from each other in two ways: (1) Inductive programming results in symbolic models (programs) which are inherently interpretable.
These programs can provide expressive, relational explanations for learned black-box models, for instance Convolutional Neural Networks for image classification. This perspective (IP for XAI) is addressed in this summary. (2) On the other hand, there might be a need for explainability of IP programs to humans. This perspective (XAI for IP) is addressed in the contribution of U. Schmid in this report. End-to-end and data-driven approaches to learning, like deep convolutional neural networks in image classification, have become prevalent and the center of attention in many research and application areas. However, some research objectives and real-world problems may not be solvable by just processing large amounts of data. In some cases, like medical diagnostics, “big data” simply may not be available [2]. At the same time, deep learning models are not inherently transparent, as opposed to models generated by interpretable machine learning algorithms such as Inductive Logic Programming (ILP) [6]. This may be a crucial deficiency and a barrier to high-stakes applicability of deep learning. In contrast, ILP frameworks provide symbolic representations in the form of predicates in First-Order Logic, tracing capabilities, and the integration of relational background knowledge by design, e.g., from human expertise and domain knowledge [3]. Moreover, their learning process is data-efficient in comparison to deep learning. In addition, being a relational learning approach qualifies ILP for explainability [1], e.g., in complex knowledge domains like medicine [2] and AI evaluation in general [5]. Deep learning may therefore profit from being combined with ILP for explanation, validation and a bi-directional interaction between a human and an AI system [3].
A crucial part of this avenue is the design of interfaces between internal representations of what a deep learning model has learned and the relational background knowledge of IP systems, like ILP, to provide human-understandable surrogate models, explanations and interactions. First attempts to bridge this gap have already been proposed [4]. However, several open questions remain to date: How can we find and extract relevant internal representations from deep learning models and present them in a human-understandable manner? How can we disambiguate representations? Which relations should be included in the IP module and satisfied by the deep learning model? How can we implement a knowledge exchange between IP and deep learning models to support the interplay of learning and reasoning in knowledge discovery and AI evaluation? In my opinion, building such systems is the way toward approximating the strengths of the human inductive bias and the adaptability of AI systems to the real world.

References

### 3.10 On Deep Rule Learning

*Johannes Fürnkranz (Johannes Kepler Universität Linz, AT)*

License: Creative Commons BY 4.0 International license © Johannes Fürnkranz
Joint work of: Florian Beck, Johannes Fürnkranz
URL: https://doi.org/10.3389/frai.2021.689398

Rule learning algorithms form the basis of classic inductive logic programming algorithms such as FOIL or PROGOL. Studying them in a propositional logic setting allows one to focus on the algorithmic aspects. A key limitation of current state-of-the-art algorithms, such as the LORD algorithm recently developed in our group [1], is that they are all limited to learning rule sets that directly connect the input features to the target feature. In a logical setting, this corresponds to learning a DNF expression.
While every logical function can be expressed as a DNF formula, we argue in this talk that learning deeply structured theories may be beneficial, drawing an analogy to (deep) neural networks [3] and recapitulating some recent empirical results [2].

References

### 3.11 Three Learning Problems in Planning

*Hector Geffner (RWTH Aachen, DE)*

I’ll talk about three learning problems in planning: learning lifted action models, learning generalized policies, and learning general problem decompositions or sketches. We have been approaching these problems in a top-down fashion, making a clear distinction between what is to be learned and how it is to be learned. Indeed, we have been pursuing two types of approaches in parallel: formulations that rely on combinatorial optimization solvers on the one hand, and deep (reinforcement) learning approaches on the other. I’ll also discuss the relation between the two approaches, which in their common form are limited by the expressive power of C2 logic (first-order logic with two variables and counting), and the challenges of getting beyond C2.

References

### 3.12 A tutorial on Popper

*Céline Hocquette (University of Oxford, GB)*

Inductive logic programming (ILP) is a form of program synthesis. The goal is to induce a logic program that generalises training examples. Popper is a recent ILP system which frames the ILP problem as a constraint satisfaction problem [1, 2]. Popper continually generates hypotheses and tests them on the training examples. If a hypothesis is not a solution, Popper builds constraints to prune hypotheses which are provably not solutions either. Popper supports learning of recursive programs, predicate invention, and learning moderately large programs. We present a recent extension of Popper which supports learning minimal description length programs from noisy data [3]. Our approach leverages recent progress in MaxSAT solvers to efficiently find an optimal program.
References

[2] Andrew Cropper, Céline Hocquette: Learning Logic Programs by Combining Programs. ECAI 2023: 501–508. https://doi.org/10.3233/FAIA230309

### 3.13 Relational program synthesis with numerical reasoning

*Céline Hocquette (University of Oxford, GB)*

Learning programs with numerical values is fundamental to many AI applications, including bio-informatics and drug design. However, current program synthesis approaches struggle to learn programs with numerical values. Program synthesis approaches based on enumeration of candidate numerical symbols cannot handle infinite domains. Recent program synthesis approaches also have difficulties reasoning from multiple examples, which is required for instance to identify numerical thresholds or intervals. To overcome these limitations, we introduce an inductive logic programming approach which combines relational learning with numerical reasoning [1]. Our approach uses satisfiability modulo theories solvers to efficiently identify numerical values. Our approach can identify numerical values in linear arithmetic fragments, such as real difference logic, and from infinite domains, such as real numbers or integers. Our results show our approach can outperform existing program synthesis approaches. However, our approach has limited scalability with respect to the complexity of the numerical reasoning stage.

### 3.14 On the role of natural language for self-programming in cognitive architectures

*Frank Jäkel (TU Darmstadt, DE)*

License: Creative Commons BY 4.0 International license

Human problem solvers are able to adapt their problem solving strategies to new situations. They program their own behavior. In order to do so, they introspect, test, debug, and optimize their problem solving algorithms. These metacognitive activities can be implemented in standard cognitive architectures that can store code in working memory and execute it with an interpreter that is implemented as a set of rules in a production system.
Additional rules can then modify the code at runtime. Unfortunately, the programming language in which such mental code is written has remained elusive. Here, I will argue that it is time to revive the old idea that program code is directly given in natural language. Traditionally, research on cognitive architectures has mostly avoided natural language, even though language is obviously an important aspect of human cognition. With the advent of large language models it seems more plausible than ever that natural language interpreters might become an essential part of a new generation of cognitive architectures. In particular, the metacognitive activity of modifying your own programs might simply consist of transforming one natural language expression into another – the task that transformers were developed for and have turned out to be quite successful at.

### 3.15 QCBA: improving rule classifiers learned from quantitative data by recovering information lost by discretisation

*Tomáš Kliegr (University of Economics – Prague, CZ)*

License: Creative Commons BY 4.0 International license

Many rule-learning algorithms require prior discretization before they can effectively process datasets with numerical data. For example, consider a dataset with attributes such as temperature and humidity. Discretization (also called quantization) means binning their values into intervals. A simple equidistant algorithm would produce intervals such as (0;10], (10;20], and (20;30]. If we consider rule learning algorithms based on association rule learning, such as Classification Based on Associations [3], discretization is necessary to ensure fast pruning of the state space and also the learning of sufficiently generalized rules. Only after the discretization is it possible to learn rules of the type IF temperature=(20;30] and humidity=(50;60] THEN worker_comfort=good.
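The equidistant discretization described above can be sketched as follows; the `(lo;hi]` interval notation follows the example in the text, while the bin width, ranges, and function name are illustrative choices, not part of QCBA itself.

```python
import math

def bin_label(value, width=10, lo=0, hi=30):
    """Assign a numerical value to a left-open, right-closed interval
    (a;b], producing labels such as (0;10], (10;20], (20;30]."""
    if not (lo < value <= hi):
        raise ValueError(f"{value} is outside the range ({lo};{hi}]")
    k = math.ceil((value - lo) / width)   # 1-based bin index
    a = lo + (k - 1) * width
    return f"({a};{a + width}]"

# Discretized attributes can then feed a rule such as:
# IF temperature=(20;30] and humidity=(50;60] THEN worker_comfort=good
print(bin_label(25))           # temperature 25 falls into (20;30]
print(bin_label(55, hi=100))   # humidity 55 falls into (50;60]
```

Because the interval edges are fixed globally and independently of the learning algorithm, information is lost at exactly this step; QCBA post-processes the learned rules to recover it.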
While some rule learning algorithms can directly work with numerical attributes, such as the recently proposed extension of the POPPER ILP system [4], for those based on association rule learning, integrating quantization may not be efficient, as it could excessively slow down the candidate generation phase. A common approach is thus to apply prediscretization, e.g., following the Minimum Description Length Principle (MDLP)-based method proposed by [2]. However, as the determination of interval lengths is done globally (i.e., the same intervals for all instances) and outside of the learning algorithm (e.g., CBA), information is lost, resulting in inefficiencies in the final classifier. To address this problem, this talk introduced the Quantitative CBA (QCBA) algorithm for the subsequent processing of rule models learned on arbitrarily pre-discretized data (e.g., with equidistant binning, MDLP or another method). Extensive experiments have shown that the proposed algorithm consistently reduces the models’ size and thus makes them more understandable. Additionally, in many cases, the predictive performance is also improved. The algorithm can be used to process the results of many rule learning algorithms, including CBA, Interpretable Decision Sets [1] and Scalable Bayesian Rule Lists [5]. The results are available in the R package qCBA, available on CRAN. The method is described in detail in [6].

References

### 3.16 RDFRules: A Swiss knife for relational association rule learning, classification and knowledge graph completion

*Tomáš Kliegr (University of Economics – Prague, CZ)*

License: Creative Commons BY 4.0 International license © Tomáš Kliegr
Joint work of: Václav Zeman, Tomáš Kliegr, Vojtech Svátek
URL: https://doi.org/10.3233/SW-200413

Many commonly used machine learning algorithms are limited to tabular data sets, but real-world data is often stored in relational databases and increasingly in knowledge graphs.
Processing such data with standard “tabular” machine learning usually requires extensive data transformations and aggregations, resulting in a loss of information. As an alternative, relational Horn rules can be used to model complex relational structures naturally and to use these in a range of machine learning tasks, including exploratory analysis, classification, and imputation of missing information. The RDFRules system for learning rules from knowledge graphs is based on the high-performance AMIE+ algorithm [1] and includes a number of improvements based on more than 10 years of experience with the development of its sister tabular rule learning system EasyMiner [2]. While the AMIE+ algorithm was initially designed for the narrower exploratory task of discovering rules with the potential to perform knowledge graph (KG) completion, the current version of the RDFRules system goes significantly beyond the original capabilities of the AMIE+ algorithm [1], as it now makes it possible to perform the following tasks:

- load not only graph data in RDF but also relational databases described as SQL scripts,
- specify fine-grained patterns to limit the search space,
- preprocess numerical literals,
- cluster discovered rules,
- perform classification tasks,
- evaluate results using standard metrics adapted to graph data and the open world assumption,
- support the KG completion task.

The new features make it possible to perform graph-based rule learning directly on complex real-world data. The system is described in [3] and available at https://github.com/propi/rdfrules.

References

### 3.17 The Child as Hacker

*Josh Rule (University of California – Berkeley, US)*

License: Creative Commons BY 4.0 International license © Josh Rule
URL: https://hdl.handle.net/1721.1/129232

I describe the child as hacker hypothesis, which relates program induction with aspects of human cognition, particularly learning [1].
Based on the deep relationship proposed to exist between knowledge and program-like structures, the child as hacker hypothesis treats the activities and values of human programmers as hypotheses for the activities and values of many forms of human learning. After introducing this idea, I then look briefly at a project where we’ve begun to implement it in a system called HL (Hacker-Like) [2]. HL explains human behaviour better than some recent alternative program induction systems by representing a concept not only in terms of its object-level content but also in terms of the inferences required to produce that content. By searching over both kinds of representations, HL learns orders of magnitude faster than competing systems. I close by discussing three major areas ripe for future research: i) developing a better empirical understanding of how people solve hard search problems; ii) understanding the neural and psychological basis for human computational abilities; and iii) better understanding the goals and values of human programmers. All three areas have the potential to significantly improve both our understanding of human intelligence and our ability to use program induction systems to solve complex problems.

### 3.18 Abstraction for Answer Set Programs

*Zeynep G. Saribatur (TU Wien, AT)*

In this talk, I present our notion of abstraction for answer set programming, a prominent rule-based language for knowledge representation and reasoning with roots in logic programming and non-monotonic reasoning. With the aim of abstracting over the irrelevant details of answer set programs, we focus on two approaches to abstraction: (1) abstraction by omission [2], and (2) domain abstraction [1], and introduce a method to construct an abstract program with a smaller vocabulary, by ensuring that the original program is over-approximated.
We provide an abstraction & refinement methodology that makes it possible to start with an initial abstraction and, upon encountering spurious solutions, automatically refine the abstraction until an abstract program with a non-spurious solution is reached. Experiments based on the prototypical implementations reveal the potential of the approach for problem analysis: it can focus on the parts of the program that cause unsatisfiability, in some cases even matching the human-like focus observed in a user study, and it can generalize answer sets so that they reflect only the relevant details. This makes abstraction an interesting topic of research whose further use for the human-understandability of logic programs remains to be explored.

3.19 Explanatory Inductive Programming (XAI for IP)
Ute Schmid (Universität Bamberg, DE)
License Creative Commons BY 4.0 International license © Ute Schmid
Joint work of Johannes Rabold, Michael Siebers, Ute Schmid
URL https://doi.org/10.1007/S10994-021-06048-W

Methods of explainable artificial intelligence (XAI) and of inductive programming (IP) can profit from each other in two ways: (1) Inductive programming results in symbolic models (programs) which are inherently interpretable. Nevertheless, there might be a need for explainability to humans – end-users or domain experts from areas other than computer science. This perspective (XAI for IP) is addressed in this summary. (2) On the other hand, expressive, relational explanations for learned black-box models, for instance convolutional neural networks for image classification, can be provided by IP. This perspective (IP for XAI) is addressed in the contribution of B. Finzel in this report. The power of IP approaches lies in their ability to learn highly expressive models from small sets of examples [2]. Learned programs can help humans gain insight into complex relational or recursive patterns underlying a set of observed data.
That is, IP might be an ultra-strong learning approach as defined by Donald Michie (see [3]), under the condition that the learning system can teach the learned model to a human, whose performance is consequently increased to a level beyond that of a human studying the training data alone. For programs which consist of several rules, or for programs involving complex relations or recursion, different approaches to constructing explanations might support human understanding. One possibility to reduce complexity is to introduce new predicates. For instance, introducing a predicate parent/2 as a generalization of father/2 and mother/2 reduces four rules for the grandparent/2 relation to one (see [3]). Another possibility is to translate the rule which covers the current instance into a verbal explanation for humans without a background in computer science. This can be realized by simple template-based methods [5]. Alternatively, large language models could be used. For effectively teaching a concept to humans, near-miss explanations have been proposed [4]. Winston showed in his early work on learning rules for relational perceptual concepts, such as arcs, that providing near misses rather than arbitrary negative examples results in faster convergence of the learned model. In cognitive science it has been shown that teaching concepts by their difference to similar concepts is much more efficient than contrasting them with more distant concepts (for a discussion of these aspects and references, see [4]). In [4], an algorithm for constructing near-miss explanations is presented and applied to different domains. Furthermore, an empirical study is presented showing that, in pairwise comparisons, participants preferred near-miss explanations over other types of explanations as more helpful. Augmenting IP models with explanations can also be helpful to support medical decision making [1].
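Returning to the predicate-invention example above, the reduction from four rules to one can be made concrete with a small Python sketch. The family facts are invented for illustration; the check confirms that a single rule over an invented parent/2 relation derives the same grandparent pairs as the four rules over father/2 and mother/2:

```python
# Toy facts: (parent, child) pairs for father/2 and mother/2.
father = {("tom", "bob"), ("bob", "ann")}
mother = {("pam", "bob"), ("liz", "ann")}

# Without predicate invention, grandparent/2 needs four rules,
# one per father/mother combination in the two body literals.
def grandparent_four_rules():
    pairs = set()
    for rel1 in (father, mother):
        for rel2 in (father, mother):          # 4 rule bodies in total
            for (x, y) in rel1:
                for (y2, z) in rel2:
                    if y == y2:
                        pairs.add((x, z))
    return pairs

# Inventing parent/2 collapses the four rules into a single one:
# grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
parent = father | mother

def grandparent_one_rule():
    return {(x, z) for (x, y) in parent for (y2, z) in parent if y == y2}

assert grandparent_four_rules() == grandparent_one_rule()
```

The invented predicate does not change what is derivable; it only compresses the program, which is exactly what makes the explanation shorter.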
Here, it might be helpful to go beyond ultra-strong machine learning and bring the human expert into the loop for incremental model correction and adaptation. In contrast to standard interactive machine learning, human feedback might go beyond label correction and allow human domain experts to also correct explanations which might be right for the wrong reasons. Correcting explanations can be seen as a special case of knowledge injection in human-in-the-loop IP, which exploits such corrections for efficient model adaptation.

3.20 Explainable models via compression of tree ensembles
Sriraam Natarajan (University of Texas at Dallas – Richardson, US)
License Creative Commons BY 4.0 International license
Joint work of Siwen Yan, Sriraam Natarajan, Saket Joshi, Roni Khardon, Prasad Tadepalli
Main reference Siwen Yan, Sriraam Natarajan, Saket Joshi, Roni Khardon, Prasad Tadepalli: "Explainable Models via Compression of Tree Ensembles". Mach. Learn.: 1-26 (2023)
URL https://doi.org/10.1007/s10994-023-06463-1

We consider the problem of explaining learned (relational) ensemble models. Ensemble models (bagging and gradient boosting) of relational decision trees have proved to be among the most effective learning methods in the area of probabilistic logic models (PLMs). While effective, they lose one of the most important aspects of PLMs – interpretability. Our key hypothesis in this work is that combining a large number of logical decision trees yields a more compressed model than combining standard decision trees, because the unification of variables in logic allows for effective and efficient compression. To this end, we propose CoTE – Compression of Tree Ensembles – which produces a single small decision list as a compressed representation. CoTE first converts the trees to decision lists and then performs the combination and compression with the aid of the original training set.
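CoTE's first conversion step – turning a decision tree into an ordered decision list, one rule per root-to-leaf path – can be illustrated on a propositional toy tree. CoTE itself operates on relational trees, so the sketch below, with invented features, is only a simplification of that step:

```python
# A tiny binary decision tree: internal nodes are (test, true_branch,
# false_branch); leaves are prediction strings. Features are invented.
TREE = ("outlook_sunny",
        ("humidity_high", "no", "yes"),
        "yes")

def tree_to_decision_list(node, conditions=()):
    """One rule per root-to-leaf path: each rule pairs the conjunction of
    tests along the path with the leaf's prediction."""
    if isinstance(node, str):                      # leaf
        return [(conditions, node)]
    test, true_branch, false_branch = node
    return (tree_to_decision_list(true_branch, conditions + (test,)) +
            tree_to_decision_list(false_branch, conditions + ("not " + test,)))

def classify(rules, features):
    """A decision list fires the first rule whose conditions all hold."""
    for conds, prediction in rules:
        if all(c[4:] not in features if c.startswith("not ") else c in features
               for c in conds):
            return prediction

RULES = tree_to_decision_list(TREE)   # three ordered rules for this tree
```

Once every tree in the ensemble is in this form, combining and compressing the resulting rule lists becomes a purely list-level operation.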
Experiments on standard benchmarks demonstrate the value of this approach and justify the hypothesis that compression is more effective with logical decision trees.

3.21 Inductive Programming meets Large Language Models
Gust Verbruggen (Microsoft – Keerbergen, BE)

Both inductive programming (IP) and large language models (LLMs) are able to complete a task from a few examples. Instead of pitting them against each other, together they can achieve a lot more. One example of such integration is FlashGPT, which iteratively uses witness functions to break an inductive programming problem into smaller subproblems until all are solved (FlashFill) and leverages an LLM to solve the subproblems that cannot be solved symbolically (GPT-3). Instead of reiterating what has been discovered, this talk focused on a (non-exhaustive) list of next steps for combining IP and LLMs. First, we discuss how an LLM can be used to improve learning in a fully symbolic IP system. Two approaches are (1) using the LLM to generate additional input-output examples for the IP system, or (2) using the LLM to generate candidate solutions that serve as seeds for initiating a search. The latter is a combination of component-based synthesis [1] and sketching, both of which rely on generating useful substructures over the grammar of the target language. Second, we show how an LLM can be used to improve the experience of working with an IP system by providing natural language descriptions of the learned programs. Third, we show how the scope of IP can be broadened with LLMs in systems that do not leverage witness functions. One potential method is masking semantic components, performing IP as usual while learning a program that emits masks, and then resolving the masks using an LLM. Fourth, we show how operators that only use the embeddings from LLMs strike a balance between the inference speed of symbolic operations and the example-efficiency and capabilities of semantic operations.
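The seed idea from the first point above can be sketched with a toy enumerator over string primitives. The DSL and the "seed" below are invented for illustration, with the seed standing in for a candidate program an LLM might propose:

```python
from itertools import product

# A toy DSL of string-transformation primitives.
PRIMITIVES = {
    "lower": str.lower,
    "upper": str.upper,
    "first": lambda s: s[:1],
    "strip": str.strip,
}

def run(program, s):
    # Apply the named primitives left to right.
    for name in program:
        s = PRIMITIVES[name](s)
    return s

def synthesize(examples, max_len=3, seeds=()):
    """Enumerate primitive sequences consistent with the input/output
    examples. Seeds -- e.g. candidate programs proposed by an LLM -- are
    tried first, so a good guess short-circuits the blind search."""
    candidates = [tuple(s) for s in seeds]
    for length in range(1, max_len + 1):
        candidates.extend(product(PRIMITIVES, repeat=length))
    for program in candidates:
        if all(run(program, i) == o for i, o in examples):
            return tuple(program)
    return None

examples = [("  Hello", "H"), ("world ", "W")]
prog = synthesize(examples, seeds=[("strip", "upper", "first")])
```

Here the seed is already consistent with the examples, so no enumeration happens at all; a wrong seed costs only one extra check before the symbolic search proceeds as usual.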
When the domain of a semantic relation is finite, or when the task is extraction of relevant parts of the input, we can use embeddings of tokens from the input to capture semantic relations between input and output.

3.22 Inductive Programming meets Real User Problems
Gust Verbruggen (Microsoft – Keerbergen, BE)

We show two novel applications of inductive programming that bring some unique challenges with respect to parsing user input. Both problems share some challenges: the need for speed, noisy input and labels, inferring constant values that adhere to a semantic bias, underspecification of the problem, and suppression of programs with low confidence. First, we consider the problem of predicting the folder to which an email should be moved. Popular email clients offer to automate this functionality by setting rules, and the expected output of our learner is thus such a rule. An additional challenge with this problem is concept drift. Our approach [1] learns simple propositional rules in conjunctive normal form by generalizing (if an email is mistakenly not covered) or specializing (if an email is mistakenly covered) the rule corresponding to a folder. Because we guarantee that all historical emails are correctly classified, we easily adapt to concept drift. This classic inductive programming approach performs better than many neural and hybrid baselines. Second, we consider the problem of learning conditional formatting rules in spreadsheets. An additional challenge is the scope of functions that can be used. Our approach [2, 3] uses semi-supervised clustering of input values to tackle underspecification, then learns different rules as decision trees, and ranks them with a learned ranker. Our corpus of 102K rules from real spreadsheets allows this ranker to encode the semantic bias, which lets us outperform many neural and symbolic approaches, even when they have access to the same set of base predicates.
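The generalize/specialize loop described for the email-folder learner can be sketched on propositional feature sets. This is a strong simplification of the approach in [1], with invented features; it only illustrates the two repair operations:

```python
class FolderRule:
    """Online conjunctive rule for one folder, with positive literals
    (required features) and negated literals (forbidden features)."""

    def __init__(self):
        self.require = None     # features every matching email must have
        self.forbid = set()     # features a matching email must not have
        self.positives = []     # emails known to belong to this folder

    def covers(self, email):
        return (self.require is not None
                and self.require <= email
                and not (self.forbid & email))

    def update(self, email, belongs):
        email = set(email)
        if belongs:
            if self.require is None:
                self.require = set(email)
            elif not self.covers(email):
                # Mistakenly not covered -> generalize the rule.
                self.require &= email
                self.forbid -= email
            self.positives.append(email)
        elif self.covers(email):
            # Mistakenly covered -> specialize by forbidding a feature
            # never seen in a positive email (if one exists).
            unseen = email - set().union(*self.positives)
            if unseen:
                self.forbid.add(min(unseen))

rule = FolderRule()
rule.update({"from_boss", "topic_report"}, True)
rule.update({"from_boss", "topic_meeting"}, True)
rule.update({"from_boss", "is_spam"}, False)    # covered, but wrong folder
```

After these three updates the rule requires `from_boss` and forbids `is_spam`, so all three historical emails are classified correctly, mirroring the consistency guarantee mentioned above.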
References
1 Mukul Singh, José Cambronero, Sumit Gulwani, Vu Le, Gust Verbruggen: EmFore: Online Learning of Email Folder Classification Rules. CIKM 2023: 2280-2290. https://doi.org/10.1145/3638780.3614863
2 Mukul Singh, José Pablo Cambronero Sánchez, Sumit Gulwani, Vu Le, Carina Negreanu, Mohammad Raza, Gust Verbruggen: CORNET: Learning Table Formatting Rules By Example. Proc. VLDB Endow. 16(10): 2632-2644 (2023). https://doi.org/10.14778/3603581.3603600

3.23 Probabilistic Logic Programming: Quo Vadis?
Felix Weitkämper (LMU München, DE)

Probabilistic inductive logic programming refers to learning probabilistic relational programs from "examples". These could be probabilistic logic programs, but many considerations also apply to learning other statistical relational models. From the perspective of statistical relational artificial intelligence, this is usually referred to as structure learning. Probabilistic inductive logic programming is key in several areas of artificial intelligence, including knowledge discovery in stochastic, relational domains and causal structure discovery in a Boolean relational setting. Probabilistic inductive logic programming is traditionally considered difficult, since it adds another dimension to the classical ILP problem. Current approaches are still based on traditional ILP approaches developed in the 1990s, while the field of ILP has since made huge progress: meta-interpretive learning provides a new conceptual framework for rethinking inductive logic programming, constraints and learning from failures can help prune the search space, and powerful ASP encodings can be leveraged to achieve more consistent outcomes. This raises the question: can we leverage these modern techniques for PILP?
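For readers less familiar with the setting, the semantics that makes PILP harder than classical ILP can be illustrated by brute-force inference over a small probabilistic logic program under the distribution semantics. The program below is a standard textbook-style toy example, not from the talk:

```python
from itertools import product

# Probabilistic facts: each is independently true with the given probability.
PROB_FACTS = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    # Deterministic rules: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

def query_probability(query):
    """Distribution semantics by brute force: sum the probability mass of
    every possible world in which the query succeeds."""
    names = list(PROB_FACTS)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name in names:
            p = PROB_FACTS[name]
            weight *= p if world[name] else 1.0 - p
        if query(world):
            total += weight
    return total

p_alarm = query_probability(alarm)   # 1 - (1 - 0.1) * (1 - 0.2) = 0.28
```

A PILP learner must search not only over rule structures, as in ILP, but also over the probability labels that induce this world distribution, which is the extra dimension mentioned above.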
4 Working groups

4.1 Large Language Models and Inductive Programming in Cognitive Architectures
Bettina Finzel (Universität Bamberg, DE) and Frank Jäkel (TU Darmstadt, DE)

Cognitive architectures provide frameworks to simulate and test principles of cognition [1]. Different components in cognitive architectures qualify for being enhanced by large language models (LLMs) [3] and inductive programming (IP) [2]. LLMs could be used in the production module to generate rules for execution, in the memory module as a compressor of information for more efficient access, and possibly as a source of stimuli for a general cognitive architecture – a model that produces outputs to which further reasoning can be applied for decision making and learning. An open challenge remains in mimicking the ability of humans to switch between modalities, in the sense that they can dynamically choose between the representations they need. With respect to this, we discussed some form of reward or reinforcer to increase the response to certain signals or items in the process of inference and problem solving. Combining learning and reasoning by integrating LLMs and IP in a cognitive architecture could be an enabler for validating programs that get executed by the overall architecture and for possibly getting nearer to human performance.

4.2 Avoiding too much search in Inductive Programming
Ute Schmid (Universität Bamberg, DE), David Cerna (The Czech Academy of Sciences – Prague, CZ), and Hector Geffner (RWTH Aachen, DE)

A crucial part of inductive programming (IP) is search. Since search is costly, an important question is how to avoid searching too much. What we search for can be very different things: logic or functional programs, but also decision lists, policies, classifications, language representations, or deep learned models.
Search can also be realized with many different approaches: enumeration, anti-unification, genetic programming, greedy strategies, combinatorial optimization, stochastic gradient descent, deep reinforcement learning, or Monte Carlo tree search. In general, we need to learn structure as well as probabilities. To evaluate the quality of the learned program, different aspects might be considered, alone or in combination, which is a challenge for search. Obvious criteria are sample complexity and scalability. But one might also be interested in the novelty of the learned program, or in how similar the inductive strategy is to human learning (humans do not enumerate first and then select, but typically generalise over a few examples). To push research towards greater search efficiency, a set of benchmark problems and a competition should be introduced. Promising challenge data sets might come from the FlashFill domain (learning more complex Excel functions and string transformations) and the abstract reasoning challenge (ARC) – a modified ILP version of which might be interesting; furthermore, we could look at problems from the International Math Olympiad Challenge. We should critically evaluate for which problems deep learning/generative approaches are more successful, and hopefully identify a class of problems where symbolic IP is superior. For instance, the IP system FlashFill performs better than the transformer-based SmartFill. The core difference between neural network approaches and symbolic IP is that IP returns explicit programs which give an intensional characterisation of the input/output examples, while neural networks are extensional representations. Therefore, one might postulate that neural networks cannot learn recursion. Currently, string transformation problems are often either syntactic (return the first letter of a string) – which works very well for symbolic IP – or semantic (give the capital for a country) – at which generative AI is very good.
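The syntactic/semantic distinction can be made concrete in a few lines; the lookup table below stands in for the world knowledge a generative model would supply:

```python
# Syntactic transformation: operates only on the form of the string.
def first_letter(s):
    return s[:1]

# "Semantic" transformation: needs world knowledge; this lookup table
# stands in for what a generative model would provide.
CAPITAL_OF = {"France": "Paris", "Italy": "Rome", "Japan": "Tokyo"}

def capital(country):
    return CAPITAL_OF[country]

# A combined task: symbolic IP alone cannot induce capital/1 from form,
# while a generative model alone is less reliable at exact composition.
def first_letter_of_capital(country):
    return first_letter(capital(country))
```

Symbolic IP can easily induce `first_letter` from a handful of examples but has no access to `CAPITAL_OF`, which is why combined problems of this shape look like a natural meeting point for the two paradigms.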
Maybe we should look for transformation problems which combine syntactic and semantic transformations (return the first letter for every string which names a capital). As is so often the case, it might be a good idea to combine symbolic IP and deep learning/generative approaches. A paper we should look at is [1].

References
1 Stéphane d'Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, François Charton: Deep symbolic regression for recurrence prediction. ICML 2022: 4520-453

4.3 Evaluation Criteria for Interpretability and Explainability of Inductive Programming
Ute Schmid (Universität Bamberg, DE), Lun Ai (Imperial College London, GB), Claudia d'Amato (University of Bari, IT), and Johannes Fürnkranz (Johannes Kepler Universität Linz, AT)
License Creative Commons BY 4.0 International license © Ute Schmid, Lun Ai, Claudia d'Amato, and Johannes Fürnkranz

Inductive programming results in symbolic models (programs) which are inherently interpretable. Nevertheless, there might be a need for explainability to humans – end-users or domain experts from areas other than computer science. In the discussion group we focused on the question of how to measure the quality of interpretable representations (programs) and of post-hoc generated explanations. The main challenge is to provide assessment metrics which do not depend on studies with humans but can be evaluated directly on the interpretations/explanations. A core difficulty is that the quality of an explanation is context-dependent: it depends on what is explained to whom, in what way (how), and for what reason (why). The means of explanation can be a set of symbolic rules (learned with IP or a rule learning system, or extracted from a neural net), the highlighting of important features (which is done by many XAI approaches such as LIME, SHAP, or LRP), a natural language explanation, or prototypical or near-miss examples. Furthermore, explanations can be more abstract or give more details.
Explanations can either be constructed to explain for what reason a learned (black-box) model gave a specific output (mechanistic explanation) or to explain the learned content to a human (functional explanation, ultra-strong machine learning). As candidates for assessment metrics we discussed (1) complexity measures (proposals for cognitive complexity measures, structural information theory, Kolmogorov complexity), (2) semantic coherence, and (3) reliability of a component (of a program), which can justify abstracting this part away if a human has sufficient/justified trust in it. A further aspect for evaluation might be a suitable trade-off between the size of the explanation (memory) and the effort to interpret it (run time), as proposed, for instance, by Donald Michie [1] or Lun Ai [2]. A program itself can be a good explanation, depending on its complexity. Abstraction might be a useful method to make explanations more comprehensible. Here, approaches like predicate/function invention, anti-unification, the introduction of higher-order constructs (such as map/fold), or compression might be helpful. A hierarchy of abstractions can be helpful for providing the 'right' level of detail for a given explanatory context. There are first approaches to explanations as a dialogue, where more detailed or different forms of explanation can be presented to a human [3]. Learned (Prolog) programs are also suitable in this context: the highest level of abstraction refers to a single (left-hand/target/head) predicate, the next level of detail can be achieved by presenting the instantiated right-hand side of a rule (or a verbal description of it), continuing by expanding predicates in the body until ground facts are reached. Recently, an explainable version of FlashFill has been developed. It showed that users sometimes reject a FlashFill rule which correctly covers the examples because they do not understand it. Here, approximate symbolic regression has been applied to provide simpler explanations [4].
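The levels-of-detail idea for learned Prolog programs can be sketched with a tiny interpreter that unfolds rule bodies to a chosen depth; the rules and facts below are invented for illustration:

```python
# Maps each derivable atom to the rule body that proves it; atoms with
# no entry are ground facts. (An invented toy program.)
RULES = {
    "grandparent(tom, ann)": ["parent(tom, bob)", "parent(bob, ann)"],
    "parent(tom, bob)": ["father(tom, bob)"],
    "parent(bob, ann)": ["father(bob, ann)"],
}

def explain(goal, depth, indent=0):
    """Render `goal` expanded to the given depth: depth 0 shows only the
    goal; each further level unfolds rule bodies, indented one step,
    until ground facts are reached."""
    lines = ["  " * indent + goal]
    if depth > 0 and goal in RULES:
        for sub in RULES[goal]:
            lines += explain(sub, depth - 1, indent + 1)
    return lines

print("\n".join(explain("grandparent(tom, ann)", 2)))
```

Increasing `depth` walks down the abstraction hierarchy: the head predicate alone, then its rule body, then the body of each subgoal, down to the ground facts.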
In the group of Josh Tenenbaum, the system LILO [5] has been developed, which provides explanations by abstraction. A final idea for assessing the quality of an interpretation/explanation has been to input the explanation of a learned program to an LLM, let the LLM generate a program from it, and then compare the originally synthesized program with the one generated by the LLM (a kind of loss function). The comparison can be done behaviourally on test cases or by comparing the code.

References
3 Bettina Finzel, David Elias Tafler, Anna Magdalena Thaler, Ute Schmid: Multimodal Explanations for User-centric Medical Decision Support Systems. HUMAN@AAAI Fall Symposium 2021

4.4 Finding Suitable Benchmark Problems for Inductive Programming
Ute Schmid (Universität Bamberg, DE), Martin Berger (University of Sussex – Brighton, GB), Sebastijan Dumancic (TU Delft, NL), Nathanaël Fijalkow (CNRS – Talence, FR), and Gust Verbruggen (Microsoft – Keerbergen, BE)
License Creative Commons BY 4.0 International license © Ute Schmid, Martin Berger, Sebastijan Dumancic, Nathanaël Fijalkow, and Gust Verbruggen

To advance the progress as well as the visibility of IP, a collection of suitable benchmarks, convincing use cases, joint formats to represent problems, and the launch of an IP challenge have been identified as helpful. In the discussion group, we focussed on benchmark sets. First, we collected problems currently used in different groups: list problems, regular expressions (RegEx), Boolean language inference, competitive programming, Math Olympiad Challenge, reasoning/theorem proving, planning, knowledge graphs, Zendo, games, navigation, biology, standard ML benchmarks (UCML Repository), natural language to programs (NL2P), abstract reasoning challenge (ARC).
Then we discussed what characteristics benchmark problems should have: tunable, clear performance metrics, standard format, correct annotations, noise, social recognition/PR, breadth, not solvable by LLMs (alone), conceptual jumps, linkable to external resources, curriculum, dramatic finish line, doable by humans. Several of these characteristics were discarded. For instance, clear metrics (beyond just right or wrong) did not seem to be a good fit (but see the discussion results on explainability). Format has also been seen as less relevant compared to having good environments to execute and evaluate learned programs, and tools/environments which are easily usable. For a selected set of characteristics, we identified the problems from the list above which fulfill the respective characteristic:
- Tunable: list problems, RegEx, Boolean languages, knowledge graphs, games, navigation, (competitive programming)
- Breadth: list problems, competitive programming, games, NL2P, Math Olymp, ARC
- Not solvable by LLMs: list problems, knowledge graphs, Math Olymp, ARC
- Dramatic finish: Math Olymp, biology, NL2P, ARC
- Curriculum: list problems, RegEx, Math Olymp, navigation
- Doable by (average) humans: list problems, RegEx, games, navigation, ARC
Given the number of criteria which are met, the following problem domains have been identified as the most promising: list problems (including string transformations and other approaches based on domain-specific languages), RegEx, Math Olymp, and ARC. We then had a further critical look at the selected problem classes and evaluated the following aspects: not suitable for application, lack of format, producibility, lack of probabilistic benchmarks, tension between standardization and generation, domain-specific problems, not perceived as difficult/relevant, need/miss to have a relational core, loss of propositional benchmarks, lack of diversity of evolution, novelty (invent a new sorting algorithm, automated computer scientist), pluggability.
After discussing these additional aspects, we came up with the following set of potentially interesting benchmark problems:
- The Automated Computer Scientist (from Andrew Cropper): learning novel (e.g. sorting) algorithms or novel data structures [1]
- Joint IP and KG (knowledge graph) problems, especially for combining syntactic and semantic transformations (e.g. give the capital for a country and then take the first letter of it); these can be list problems or Excel tables [5]
- Strategy learning (explicit, as compared to implicit policy learning in reinforcement learning): for human problem solving and planning (look at problems from the planning competition) [2, 3, 4]
- The Online Encyclopedia of Integer Sequences (OEIS) (not all of its sequences have a closed formula)
- Expert domains: learning strategies/patterns for SAT solvers and theorem provers
- Constructing ML pipelines (AutoML)
Links to benchmark data sets:
- Popper's (includes Zendo and many others) https://github.com/logic-and-learning-lab/Popper/tree/main/examples
- SyGuS (includes list problems, FlashFill, phone numbers) https://github.com/SyGuS-Org/benchmarks
- IP Repository (programming benchmarks, including problem solving like Tower of Hanoi) https://www.inductive-programming.org/repository.html
- Regular expressions https://codalab.lisn.upsaclay.fr/competitions/15096
- Boolean language inference https://www.iwls.org/contest/
- Competitive programming https://github.com/openai/human-eval
- Planning https://github.com/AI-Planning/pddl-generators
- Abstract Reasoning Challenge (ARC) [7, 8] https://github.com/fchollet/ARC https://lab42.global/arc/
- Rule learning https://github.com/kliegr/arcbench

5 Panel discussions

5.1 Inductive Programming – How to Go On?
Ute Schmid (Universität Bamberg, DE), Claudia d'Amato (University of Bari, IT), Hector Geffner (RWTH Aachen, DE), Sriraam Natarajan (University of Texas at Dallas – Richardson, US), and Josh Rule (University of California – Berkeley, US)

In a final discussion we addressed topics and activities for making scientific progress and making the topic more visible. A crucial problem might be that there is no core general approach to IP (such as gradient descent for neural networks). The most prominent IP task is to learn programs from input/output examples. Other approaches address learning programs from traces or constraints. Methods range from classic inductive generalization and folding for the induction of functional programs, over genetic and evolutionary programming, to a collection of ILP methods (sequential covering, theta-subsumption, combination with tools from answer set programming). Relevant use cases might not focus on learning recursion/loops but on relations (e.g. in medicine and biology). The focus on learning programs (including recursion) might profit from using Python as the target language. Furthermore, current IP systems are mostly not easy to find and use. Therefore, a toolbox which can be used easily (such as Weka for standard ML) might be helpful. Currently, Sebastijan Dumancic is working on such a toolbox (Herb.jl). A collection of data sets and benchmark problems (see the summary of the discussion group about benchmarks) would also be very helpful, especially when they are given in a standardized, easy-to-parse format. To make the field of IP less scattered, it might be helpful to write a primer on IP, covering the classic approaches to inductive functional programming and also relating them to deductive and transformational program synthesis methods and to genetic/evolutionary programming.
As an outcome of the AAIP 2023 seminar, we plan to publish a book "Inductive Programming" which contains systematic introductions to the core topics as well as a collection of recent work (from participants of the seminar plus an open call for contributions) addressing topics such as "New Approaches to IP", "Cognitive Aspects of IP", and "Applications of IP". Furthermore, it has been proposed to apply for an IP workshop at IJCAI and to try to include IP as a topic at the next European Summer School on AI (ESSAI). It might also be helpful for visibility and community building to propose a COST network on IP. The website https://www.inductive-programming.org/ should be kept but made more general, linking from there to a GitHub repository for IP. The Wikipedia entry on inductive programming, https://en.wikipedia.org/wiki/Inductive_programming, should be updated by the community. We might also establish a tag inductiveprogramming on LinkedIn which the IP community should include in posts. More members of the community should give Inductive Programming as a keyword in their Google Scholar profiles. We could collect all videos related to IP in a YouTube channel, and we could produce a 3-minute introductory video.

Participants
- Lun Ai, Imperial College London, GB
- Martin Berger, University of Sussex – Brighton, GB
- David Cerna, The Czech Academy of Sciences – Prague, CZ
- David J. Crandall, Indiana University – Bloomington, US
- Claudia d'Amato, University of Bari, IT
- Luc De Raedt, KU Leuven, BE
- Sebastijan Dumančić, TU Delft, NL
- Kevin Ellis, Cornell University – Ithaca, US
- Nathanaël Fijalkow, CNRS – Talence, FR
- Bettina Finzel, Universität Bamberg, DE
- Johannes Fürnkranz, Johannes Kepler Universität Linz, AT
- Hector Geffner, RWTH Aachen, DE
- Céline Hocquette, University of Oxford, GB
- Frank Jäkel, TU Darmstadt, DE
- Emanuel Kitzelmann, Technische Hochschule Brandenburg, DE
- Tomáš Kliegr, University of Economics – Prague, CZ
- Maithilee Kunda, Vanderbilt University – Nashville, US
- Johannes Langer, Universität Bamberg, DE
- Sriraam Natarajan, University of Texas at Dallas – Richardson, US
- Stassa Patsantzis, University of Surrey – Guildford, GB
- Josh Rule, University of California – Berkeley, US
- Zeynep G. Saribatur, TU Wien, AT
- Ute Schmid, Universität Bamberg, DE
- Gust Verbruggen, Microsoft – Keerbergen, BE
- Felix Weitkämper, LMU München, DE
Bootstrapping Parameter Space Exploration for Fast Tuning Jayaraman J. Thiagarajan∗ Lawrence Livermore National Laboratory jayaramanthi1@llnl.gov Nikhil Jain† Lawrence Livermore National Laboratory nikhil@llnl.gov Rushil Anirudh Lawrence Livermore National Laboratory anirudh1@llnl.gov Alfredo Gimenez Lawrence Livermore National Laboratory gimenez1@llnl.gov Rahul Sridhar University of California, Irvine rsridha2@uci.edu Aniruddha Marathe Lawrence Livermore National Laboratory marathe1@llnl.gov Abhinav Bhatele Lawrence Livermore National Laboratory bhatele@llnl.gov Tao Wang North Carolina State University twang15@ncsu.edu Murali Emani Lawrence Livermore National Laboratory eman1@llnl.gov Todd Gamblin Lawrence Livermore National Laboratory gamblin2@llnl.gov ABSTRACT The task of tuning parameters for optimizing performance, or other metrics of interest such as energy and variability, can be resource- and time-consuming. The presence of a large parameter space makes comprehensive exploration infeasible. In this paper, we propose a novel bootstrap scheme, called GEIST, for parameter space exploration to find performance-optimizing configurations quickly. Our scheme represents the parameter space as a graph whose connectivity guides information propagation from known configurations. Guided by the predictions of a semi-supervised learning method over the parameter graph, GEIST is able to adaptively sample and find desirable configurations using limited results from experiments. We show the effectiveness of GEIST for selecting application input options, compiler flags, and runtime/system settings for several parallel codes including LULESH, Kripke, Hypre, and OpenAtom. CCS CONCEPTS • General and reference → Performance; • Theory of computation → Semi-supervised learning; • Computing methodologies → Search with partial observations; ∗J.J. Thiagarajan and N.
Jain contributed equally to this work. †Corresponding author. KEYWORDS autotuning, sampling, performance, semi-supervised learning 1 INTRODUCTION As the complexity of High-Performance Computing (HPC) and big-data systems, software stacks, and applications continues to rise, achieving high performance has become difficult. Most components of these ecosystems are increasingly becoming more configurable, and to maximize performance, correctly configuring these components has become essential. To illustrate this concern, Figure 1 shows the distribution of runtime for Kripke [21], a transport code, with different configurations. Here, performance varies by 1000× depending on the choice of application parameter values for a constant input problem. The number of tunable parameters that a user can configure has increased linearly, and as a result, the overall parameter space has grown exponentially. In addition, optimizing for performance metrics other than execution time, such as energy consumption, has become increasingly essential.1 1Throughout this paper, we use "performance" as a generic term to refer to the metric being optimized, such as execution time, energy, and variability. Autotuning requires quantifying the effects that different parameters will have on performance. However, making this determination a priori is usually infeasible, as it would require constructing complex models for a variety of available parameters and system environments. Therefore, autotuning frameworks typically employ empirical approaches by collecting performance samples and adjusting a model to fit them. However, collecting a large number of performance samples can be prohibitively expensive, as individual runs may take minutes to hours to complete.
Autotuning therefore requires methods to automatically reduce the search space of possible configurations to avoid expensive training while retaining enough information to determine performance-optimizing configurations. Traditional methods for autotuning are typically built upon heuristics derived from experience [9, 14]. Many of these methods need to be reworked as new parameters become available. Further, several existing approaches utilize simple prediction techniques such as linear regression, and hence require a reasonably large number of samples for better decision making. Recent work has shown promise in the use of sophisticated statistical learning techniques to build accurate and generalizable models, thus reducing the overheads of autotuning [23, 26]. In particular, adaptive sampling, a technique in which sample collection is performed incrementally, has produced encouraging results [10]. In this paper, we develop a new approach that identifies high-performing configurations using as few samples as possible, while minimizing the time spent in exploring sub-optimal configurations. Our approach, named Good Enough Iterative Sampling for Tuning (GEIST), uses semi-supervised learning to effectively guide the search for high-performing configurations, while being robust to the choice of the initial sample set. Specifically, this paper makes the following contributions: - We introduce GEIST, a novel semi-supervised learning-based adaptive sampling scheme for parameter space exploration. - We show that GEIST finds performance-optimizing configurations for different types of parameters including application input options, compiler flags, and runtime/system settings. - We show that GEIST outperforms expert configuration selection and known sampling approaches based on random selection, Gaussian Process [10], and Canonical Correlation Analysis [13].
- We show that GEIST uses only up to 400 samples for effectively exploring parameter spaces with up to 25,000 configurations. 2 RELATED WORK Active Harmony is one of the earliest projects aimed at automatic tuning of HPC applications [8, 9]. Since then, a variety of modeling-based methods have been developed for fine-tuning system parameters [11, 29, 31]. At the compiler level, researchers have designed machine learning-based techniques for automatic tuning of the iterative compilation process [25] and tuning of compiler-generated code [24, 28]. Furthermore, several tuning approaches have been developed for application parameter spaces [2, 3]. In general, these approaches target a specific type or subset of parameters, and are often restricted to a component or domain in the HPC or big-data workflow. In contrast, the proposed work does not rely on any domain-specific knowledge, and can take into account the combined influence of different types of parameters. There also exists a class of autotuners that are designed for multi-objective optimization; examples include RSGDE3 [16], Periscope Tuning Framework [14], and ANGEL [6]. These approaches support only specific types of parameters and certain distributions of the target variable, and operate towards an absolute user-informed objective on the target variable. In contrast, our approach is designed to handle different types of parameters and distributions, and does not need any form of user input. Another important class of methods in this research direction attempts to reduce the resources/time spent in autotuning through the use of machine learning techniques. Rafiki [22] combines neural networks and genetic algorithms to optimize NoSQL configurations for Cassandra and ScyllaDB. RFHOC [4] uses a random-forest approach to search the Hadoop configuration space. Jamshidi et al. [19] and Roy et al.
[26] proposed the use of transfer learning for predicting performance on a target architecture using data collected from another architecture. On the other hand, Grebhn et al. [15] and Marathe et al. [23] utilized transfer learning to select high-performing combinations at a target configuration using domain knowledge extracted from other low-cost configurations. In contrast, our approach relies solely on samples collected for the target problem, and minimizing the number of samples collected is a core objective. Further, our approach avoids the need to build models that perform well for the entire configuration space, and thus needs fewer samples. The proposed work is most similar to prior efforts that apply statistical machine learning techniques to bootstrap the configuration sampling process [10, 13]. Ganapathi et al. [13] proposed a Kernel Canonical Correlation Analysis (KCCA)-based approach to derive the relationship of parameters with performance and energy. Duplyakin et al. [10] present a Gaussian Process Regression-based method to minimize the search space when building regression models for HPC performance analysis. In this paper, we present a detailed comparison with these two approaches and show that the proposed approach outperforms both. 3 BOOTSTRAPPING WITH GEIST The main aim of the proposed work is to identify the best performing configurations for a given application and parameter options. Although well defined, the space formed by all possible parameters is impractically large in many cases, as a result of which an exhaustive search is infeasible. This section outlines the proposed strategy for smart sampling, which seeks to identify the configurations that result in optimal performance, while observing only a fraction of the entire parameter space. 3.1 Performance Tuning as Adaptive Sampling Exploring high-dimensional parameter spaces is ubiquitous in different application domains in HPC.
One popularly adopted approach for this is to select a subset of samples from the parameter space with the goal of achieving an optimization objective. In our context, a sample corresponds to a specific configuration of system/application-level parameters, while sample collection amounts to actually running the application with a chosen configuration. Most often, the optimization objective is to identify high-performing configurations, if not the best. The size and complexity of the parameter space can vary significantly across different use cases, thus making it challenging to design a sequential sampling scheme that performs consistently well across use cases. On one extreme, with no prior knowledge about the space, the best one can do is to randomly draw a configuration from the parameter space. On the other extreme, an expert user can make an informed choice based on experience. While the former approach is prone to large variability in the achievable performance, the latter can be limited by the lack of a comprehensive understanding of the interactions between different parameters. Consequently, in practice, an iterative approach is utilized to progressively obtain samples from regions of high-performance in the parameter space, as determined by a predictive model. Commonly referred to as adaptive sampling or active learning [27], this approach employs a surrogate model to emulate the process of running the experiment and measuring the performance of a configuration by directly predicting the performance metric. However, such a surrogate model can be plagued by large bias and variance characteristics, arising due to the large range of the metric values, and the lack of a sufficient number of training samples, respectively. Hence, resampling distributions inferred based on the resulting models can be highly misleading. 
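The generic adaptive-sampling loop described above can be sketched in a few lines. The one-dimensional toy objective and the nearest-neighbor surrogate below are illustrative assumptions made for this sketch, not the method proposed in this paper:

```python
import random

def run_experiment(cfg):
    # Hypothetical stand-in for an expensive application run:
    # a toy metric with its minimum at cfg == 42.
    return abs(cfg - 42)

space = list(range(100))                       # exhaustive parameter space
random.seed(0)
observed = {c: run_experiment(c) for c in random.sample(space, 5)}

for _ in range(8):                             # adaptive-sampling iterations
    def predict(cfg):
        # Surrogate model: predict a config's metric from its nearest
        # already-observed configuration (a deliberately crude emulator).
        nearest = min(observed, key=lambda o: abs(o - cfg))
        return observed[nearest]
    # Resample the unobserved configs with the best predicted metric.
    unseen = [c for c in space if c not in observed]
    unseen.sort(key=predict)
    for c in unseen[:3]:
        observed[c] = run_experiment(c)

best = min(observed, key=observed.get)
```

As the text notes, such a regression surrogate can mislead resampling when the metric range is large and training samples are few; GEIST sidesteps this by reformulating the prediction task.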
3.2 Modeling Parameter Spaces using Graphs In order to address the crucial challenge posed by bias and variance characteristics, we develop a novel bootstrapping approach, called Good Enough Iterative Sampling for Tuning (GEIST), for fast tuning of parameters to achieve optimal performance. In GEIST, we 1) represent parameter spaces using undirected graphs, 2) transform the performance metric prediction task into a categorical label prediction task, 3) utilize a state-of-the-art semi-supervised learning technique for label propagation, and 4) employ an iterative sampling pipeline that effectively explores the regions of high-performing parameter configurations. In the rest of this section, we describe the proposed approach. In contrast to conventional supervised learning approaches, the problem of finding high-performing configurations more naturally fits a transductive learning framework [20]. In transductive learning, we assume access to the exhaustive set of samples (only configurations, not their performance) in the space that need to be classified, prior to building the model. Given the input set of parameters and their potential values for each application or use case, the exhaustive set of parameter configurations can be easily constructed, thus enabling the use of transductive learning. Moreover, transductive learning is well suited to the given problem because a broad class of semi-supervised learning methods, which often represent high-dimensional data concisely using neighborhood graphs, fall into this category. The edges in the graph encode the necessary information to perform crucial tasks such as information propagation and data interpolation. Thus, these methods can take advantage of the conventional autotuning wisdom that a high-performing configuration is typically near other high-performing configurations in the parameter space.
Let $G = (V, E)$ denote an undirected graph, where $V$ is the exhaustive set of parameter space configurations ($|V| = N$ nodes), and $E$ is the set of edges, indicating similarity between nodes. In our context, the exhaustive set of parameter configurations $S = \{x_i\}_{i=1}^N$ is used to construct the neighborhood graph $G$, where each node is connected to its $k$ nearest neighbors determined based on the Manhattan distance ($\ell_1$ norm). 3.3 Reformulating Performance Prediction As discussed in Section 3.1, using the performance metric as a response variable can lead to models with high bias and variance. Hence, we resort to transforming the continuous performance metric into a categorical variable (optimal/non-optimal) and employ semi-supervised label propagation to predict the labels at all configurations in $S$. Given a relatively small, initial sample set $S_0 = \{x_i\}_{i=1}^{N_0}$ generated using uniform random sampling, we perform the experiments and build the dataset comprised of the tuples $\{(x_i, y_i)\}_{i=1}^{N_0}$ of size $N_0$, where $y_i$ denotes the performance metric (e.g., run time or energy) for each case. Without loss of generality, we always define our performance metric in such a way that its value needs to be minimized. Following this, we transform the performance metric for each sample into a categorical label: $$L(x_i) = \begin{cases} \text{optimal}, & \text{if } y_i \leq \Delta_l, \\ \text{non-optimal}, & \text{otherwise}, \end{cases} \quad (1)$$ where $\Delta_l$ denotes the threshold on the performance metric to qualify an experimental run as "optimal". The choice of the hyperparameter $\Delta_l$ will be discussed in Section 4.3.1. 3.4 Semi-Supervised Label Propagation We now describe how the performance labels are propagated using the parameter space graph and training sample set.
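The neighborhood-graph construction and the relabeling in Eq. (1) can be sketched as follows; the tiny 5×5 parameter grid, the choice k = 3, and the variable name delta_l for the threshold are assumptions made for illustration:

```python
from itertools import product

# Toy 2-D parameter space: every combination of two 5-valued parameters.
configs = [list(c) for c in product(range(5), range(5))]   # |V| = N = 25 nodes

def manhattan(a, b):
    # l1 norm between two configurations.
    return sum(abs(x - y) for x, y in zip(a, b))

# Connect each node to its k nearest neighbors under the Manhattan distance.
k = 3
edges = set()
for i, ci in enumerate(configs):
    nearest = sorted((j for j in range(len(configs)) if j != i),
                     key=lambda j: manhattan(ci, configs[j]))[:k]
    for j in nearest:
        edges.add((min(i, j), max(i, j)))      # store each undirected edge once

# Eq. (1): threshold the continuous metric into a categorical label.
def label(y, delta_l):
    return "optimal" if y <= delta_l else "non-optimal"
```

The undirected edge set is deduplicated by always storing the smaller node index first.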
The problem of propagating labels to nodes in a graph has been well-studied in the machine learning literature under the context of semi-supervised learning [5]. Formally, given a partially labeled graph $G$, label propagation is aimed at estimating the probability $p_{ik}$ that a node $i$ is associated with label $k$: $$p_{ik} \propto b_{ik} + \beta \sum_{j} W_{ij} \, p_{jk}, \quad (2)$$ where $b_{ik}$ denotes the prior belief that node $i$ carries label $k$, $W$ is the adjacency matrix of $G$, and $\beta$ controls the influence of neighboring nodes. Based on these estimated probabilities, a classification function $C(x_i) = \arg \max_k p_{ik}$ can then be used to predict the label for that node. In this paper, we utilize Confidence Aware Modulated Label Propagation (CAMLP) [30]. In summary, CAMLP starts with arbitrary values for $p_{ik}$ and converges to the final predictions by iteratively computing $$P^t = Z^{-1} \left( B + \beta W P^{t-1} \right),$$ where $t$ and $t - 1$ correspond to the current and previous iterations of the label propagation, $B$ is the matrix of prior beliefs, and $Z$ is a normalizing matrix. Note that this is the matrix form of the expression in Eq. (2). Figure 2 (right) demonstrates the working of both the graph construction and label propagation steps. The larger sized nodes indicate the configurations for which we have already collected the data, and the node color indicates its optimality (orange denotes optimal). Using the graph structure, the CAMLP algorithm recursively propagates the information and predicts the label at every other unlabeled node in the space (smaller sized nodes).
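The CAMLP-style update can be illustrated on a toy graph. The 4-node path, the uniform priors for unlabeled nodes, the interpretation of the first term of the iterate as a prior-belief matrix B, and the implementation of Z as per-row normalization are all assumptions of this sketch, not details taken from [30]:

```python
# Path graph 0-1-2-3: node 0 is labeled "optimal", node 3 "non-optimal";
# beliefs are propagated to the unlabeled nodes 1 and 2.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]                              # adjacency matrix of G
B = [[1.0, 0.0],                                # prior beliefs per node
     [0.5, 0.5],                                # (columns: optimal, non-optimal)
     [0.5, 0.5],
     [0.0, 1.0]]
beta = 0.5                                      # neighbor-influence weight

P = [row[:] for row in B]                       # arbitrary initialization
for _ in range(50):                             # iterate P = Z^{-1}(B + beta*W*P)
    new_P = []
    for i in range(4):
        row = [B[i][k] + beta * sum(W[i][j] * P[j][k] for j in range(4))
               for k in range(2)]
        z = sum(row)                            # Z: normalize row to a distribution
        new_P.append([v / z for v in row])
    P = new_P

labels = ["optimal" if p[0] > p[1] else "non-optimal" for p in P]
```

By symmetry of this toy setup, node 1 inherits the "optimal" belief of node 0 and node 2 the "non-optimal" belief of node 3.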
This process has effectively created a distribution in the parameter space that indicates that every orange node has an equally likely chance of being a high-performing configuration, while blue nodes have no evidence of being high-performing. We utilize this labeling scheme to design an iterative algorithm for progressively sampling expected high-performing configurations from $S$, while avoiding the selection of other configurations. 3.5 GEIST Algorithm An overview of the proposed iterative scheme that utilizes the techniques described in this section so far is shown in Figure 2 (left) and Algorithm 1. Starting with a uniformly random selection of training samples from the parameter space as the bootstrap set, GEIST uses semi-supervised label propagation to identify potentially optimal candidates from the unseen set. For a random subset of those potentially optimal candidates, experimental results are obtained and the subset is added to the bootstrap set. Next, the steps of semi-supervised label propagation, random subset selection from the potentially optimal candidates, experimental results collection for the subset, and expansion of the bootstrap set using the subset are performed iteratively. The number of iterations for which GEIST is run can either be determined by the number of experiments that can be executed based on resource availability, or can be based on the configurations obtained in every iteration. For example, if the minimum runtime of configurations obtained so far does not improve in consecutive iterations, the process can be terminated. Overall, the iterative process of GEIST is trying to explore neighborhoods of high-performing configurations in order to find more high-performing configurations. As such, unlike conventional convex optimization strategies, GEIST does not rely on a single gradient direction to identify the global minimum.
Instead, the semi-supervised learning strategy of GEIST can be interpreted as a collection of multiple locally meaningful models, which ends up sampling both local and global minima alike. Intuitively, by progressively sampling in this way, GEIST can better resolve different neighborhoods in the parameter space, and potentially even identify the globally optimal configuration, $s_{opt}$.

Algorithm 1 GEIST Algorithm
1: Inputs:
2: Parameter space $S$, initial sample size $N_0$, threshold $\Delta_l$, number of iterations $T$, number of samples added in each iteration $N_s$.
3: procedure
4: Initialize bootstrap set $B = \{\}$.
5: Initialize unseen test set $U = S$.
6: Generate a uniform random sample $S_0$ of size $N_0$ from $S$.
7: Update $B = B \cup S_0$.
8: Construct neighborhood graph $G$ for $S$.
9: loop for $T$ iterations:
10: Run experiments for samples in $B$ and build $\{(x_i, y_i)\}_{i \in B}$.
11: Update $U = U \setminus B$.
12: Compute categorical label $L(x_i)$, $x_i \in B$ using Eq. 1.
13: Predict the labels for all configurations in $U$ using CAMLP.
14: Randomly select $N_s$ optimal cases from $U$ to build $S_i$.
15: Update $B = B \cup S_i$.

3.6 Success Metrics A high-fidelity adaptive sampling strategy is expected to recover most of the optimal configurations while observing the least number of training samples. In a typical scenario, this is measured by the accuracy of the semi-supervised learning approach. However, such an evaluation is not applicable here since we are not interested in recovering the low-performing configurations, and thus are not trying to generate a methodology that predicts well for the entire parameter space. As a result, we adopt the following metrics: 1. Percentile score of $\Delta_l$ (PSD-L). This measures how many samples have been added below the initial tolerance threshold $\Delta_l$.
A good sampling strategy is expected to add a large number of configurations with performance metric $y_i$ lower than the initial threshold $\Delta_l$, and thus lower the cost of sample collection. We measure PSD-L in the bootstrap set $B$ during every iteration, and expect it to increase in every iteration. 2. Percentile score of $\Delta_h$ (PSD-H). Like $\Delta_l$, let us define $\Delta_h$ to be the threshold beyond which a configuration is qualified as a low-performing configuration. PSD-H measures how many samples are added above the threshold $\Delta_h$. We expect a good strategy to minimize the inclusion of low-performing configurations, and consequently, we also expect it to increase in every iteration. 3. Best Performing Configuration (BPC). A more straightforward metric is to track the best-performing configuration in the bootstrap set in each iteration of the sampling process. We expect an effective algorithm to identify a high-performing configuration within a few iterations of bootstrapping. In particular, we also expect this best performance to be close to the global optimum in the parameter space, if not the best. 4 EVALUATION SETUP AND DATASETS In order to evaluate the proposed adaptive sampling approach, GEIST, and compare it with existing approaches, we autotune different types of parameters for optimizing performance metrics, such as the execution time and the total energy consumed, of different benchmark applications. 4.1 Benchmarks and Parameter Sources We use a combination of benchmarks and multiple sources of parameters to create a diverse set of scenarios. In particular, we perform autotuning for compiler flags, application-specific parameters, and runtime options (e.g., OpenMP thread count, power cap). OpenAtom. OpenAtom [18] is a scalable Charm++-based [1] parallel simulation software for studying atomic, molecular, and condensed phase material systems based on quantum chemical principles.
Similar to other Charm++ applications, OpenAtom allows end users to over-decompose the physical domain and the associated work/data units. In order to achieve high performance, it is critical to choose the right level of over-decomposition for different work/data units; this choice is the subject of our autotuning experiments. LULESH and compiler flags. LULESH is a shock hydro mini-app developed at Lawrence Livermore National Laboratory. It performs a hydrodynamics stencil calculation using both MPI and OpenMP to achieve parallelism. Among other features, LULESH stresses compiler vectorization, OpenMP overheads, and on-node parallelism. Hence, we use LULESH to study and find compiler flags that improve the execution time for single-node runs. Hypre. Hypre [12] is a parallel linear solver library used in many production applications. It supports many solvers and smoothers, characterized by varying performance and scaling properties. new_ij is a test program that allows evaluation of these different options. In this work, we autotune these options and their associated parameters for solving the Laplacian test problem. Laplacian is a 3D Laplace problem discretized using a 27-point finite difference stencil. Kripke. Kripke is a proxy application for a production transport code for particle physics [21]. In order to enable exploration of novel architectures, it provides several input parameters that change the data structures and code flow, but do not impact the science output. In addition, it can be parallelized using OpenMP. We autotune all these parameters to optimize execution time as well as energy consumption in the presence of a tunable, hardware-enforced power bound. RAJA policies. RAJA [17] is an abstraction layer for defining looping regions of code that enables developers to easily modify the underlying implementation of different loops without having to rewrite their code.
Instead of explicitly writing loops, developers use RAJA to define the body of a loop and its associated “policy”, which describes the loop iteration space, the runtime framework for executing it (e.g., sequential or SIMD), and the desired loop iteration order. We autotune parameters of the RAJA loop policies for six different loops in Kripke to optimize overall execution time. Table 1 summarizes the test cases we use in this paper. Each of these scenarios is discussed in detail in Section 5. 4.2 Distribution of Observed Performance Figure 3 presents the distribution of the observed performance for different datasets summarized in Table 1. We present these distributions in order to develop familiarity with the search space over which autotuning is being carried out. Note that GEIST, in general and for the results shown in Section 5, does not use any prior knowledge of performance distribution over the search space. Table 1: Parameter space and performance metric for the use cases explored. 
<table> <thead> <tr> <th>Application</th> <th>Metric</th> <th>Parameter type(s)</th> <th>Parameter space</th> </tr> </thead> <tbody> <tr> <td>LULESH</td> <td>Runtime</td> <td>Compiler flags, #OpenMP threads, system power cap</td> <td>4,800 - 25,920</td> </tr> <tr> <td>OpenAtom</td> <td>Runtime</td> <td>Decomposition flags, for electronic states, density, FFT, pair calculation etc.</td> <td>8,928</td> </tr> <tr> <td>Hypre</td> <td>Runtime</td> <td>Solver, smoother, coarsening scheme, interpolation operator</td> <td>4,580 - 25,198</td> </tr> <tr> <td>Kripke</td> <td>Runtime</td> <td>Application parameters, nesting order, group set, direction set, #OpenMP threads</td> <td>1,600</td> </tr> <tr> <td>Kripke</td> <td>Energy</td> <td>Application, system parameters, power cap, all of above</td> <td>17,815</td> </tr> <tr> <td>RAJA</td> <td>Runtime</td> <td>Loop policy, 6 loops: sequential, thread-parallel, nested parallelism strategy</td> <td>18,000</td> </tr> </tbody> </table> The evaluation cases that we present in this paper, and other datasets that we have studied, can be broadly divided into three categories. The first category of cases consists of many high-performing configurations. For example, execution times of OpenAtom and LULESH (Figures 3a,3b) over their corresponding parameter spaces exhibit heavily loaded bins on the left. It is interesting to note that, while the performance distribution for OpenAtom shows a single mode at lower execution times, LULESH exhibits a more complex distribution with multiple modes, but still contains strong modes at the bins to the left. For such distributions, it is relatively easy to find a few high-performing configurations because of their abundance. The second category of cases includes those with few samples close to best performance, followed by bins with higher occupancy, often containing configurations with moderately high performance. 
Results obtained for Hypre and Kripke (Figures 3d, 3e, 3f) are examples of such distributions (note the log-scale on the x-axis). For such scenarios, while finding a few good configurations is easy, identifying the configurations with the highest performance is hard. The last category is comprised of datasets that are heavily distributed to the right, i.e. they exhibit very few high-performing configurations and most of the configurations provide poor performance. Among our datasets, autotuning of RAJA policies, shown in Figure 3c, is one such scenario. This category is the most challenging in terms of finding high and/or good performing configurations. 4.3 Evaluation Methodology In our evaluation, for efficiency reasons and for reducing the effect of external factors, we pre-run all configurations and store the information. The oracle simply reads the metric values for the configurations requested by the method from this key-value store. The performance metric values are always stored in a form where lower values are preferred. 4.3.1 Hyper-parameter Selection. All the adaptive sampling methods used in our evaluation, including GEIST, require the selection of four hyper-parameters: the size of the initial sample set $N_0$, the thresholds on the performance metric for classifying a configuration as high-performing ($\Delta_l$) and low-performing ($\Delta_h$), and the number of samples to be added incrementally in each iteration $N_s$. In order to ensure statistical stability of the results, $N_0$ cannot be very small; hence for each dataset and method, we set $N_0 \sim 90$ configurations. For similar reasons, we set $N_s \sim 50$ for all cases, except Kripke for which $N_s = 16$ because that dataset is relatively small.
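The threshold hyper-parameters can be derived as percentiles of the metric values observed in the initial sample set $S_0$. A minimal sketch, assuming synthetic metric values and a simple nearest-rank percentile rule:

```python
import random

# Hypothetical metric values (e.g., runtimes) from an initial random sample
# of 90 configurations; real values would come from actual experiments.
random.seed(1)
initial_metrics = sorted(random.uniform(1.0, 100.0) for _ in range(90))

def percentile(sorted_vals, q):
    # Nearest-rank percentile; adequate for this sketch.
    idx = min(len(sorted_vals) - 1, int(q / 100.0 * len(sorted_vals)))
    return sorted_vals[idx]

delta_l = percentile(initial_metrics, 5)    # qualifies a run as high-performing
delta_h = percentile(initial_metrics, 90)   # qualifies a run as low-performing
```

Tying both thresholds to the initial sample avoids injecting prior knowledge of the benchmark into the sampling process.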
The choice of $\Delta_l$ can depend on the type of application, the parameters being tuned, and the size of the parameter space. One would prefer a very low $\Delta_l$ if the parameter space is large, or if one desires to aggressively search for only the very best configuration. However, it is prudent to set $\Delta_l$ and $N_s$ in a way that facilitates the models built for a dataset to provide enough samples for iteratively populating the configuration query list to the oracle. In order to avoid any bias towards a method or from past experience with the benchmarks, we choose $\Delta_l$ to be the 5th percentile of the performance metrics from the initial sample set $S_0$ for all datasets. The choice of $\Delta_h$ does not impact the sampling method and is used for evaluation purposes only. We set it to be the 90th percentile in the initial set, and measure how many extremely slow configurations, and hence experiments, a method can avoid. Finally, the number of iterations, which in practice should be determined by the number of experiments that can be run and the trend in the results obtained, is set to 8 for all methods; we intend to study the trends observed for different datasets and methods across iterations. 4.4 Competing Methods We now briefly describe the other configuration selection methods that we use for comparison in our experiments. 1. Random Selection: This is the simplest of all sampling strategies, where we add a random set of $N_s$ samples in each iteration to the bootstrap set. While random sampling is expected to have a large variance, it can be particularly poor at finding good configurations using only a limited number of samples. 2.
**Gaussian Process-based Adaptive Sampling:** This is a common sampling technique in uncertainty quantification (UQ) applications, where the samples to be added to the training set are chosen based on both the expected metric value and the prediction uncertainty from a Gaussian Process regressor. The intuition here is that predictions with a large variance lie in regions of high uncertainty. Hence, in each iteration, we add samples that are predicted to be high-performing, as well as those with large variance, to improve the model in subsequent iterations.

3. **CCA-based Neighborhood Selection:** Similar to the approach in [13], we utilize canonical correlation analysis to learn a mapping \( V \) such that \( V^TX \) is maximally correlated with the performance metric \( y \), using the samples in the bootstrap set. In each iteration, we choose the \( N_+ \) nearest neighbors of the current best configuration and add them to the bootstrap set.

4. **Expert Choice:** We compare against a near-optimal configuration determined manually by an expert practitioner.

5. **Exhaustive Search (Oracle):** In order to get a sense of how well we are able to find the optimal configuration(s), we also compare our method against the best performance that can be obtained on an application, found using an exhaustive search.

## 5 EVALUATION

In this section, we evaluate and compare GEIST with the other methods described in Section 4.4 on the benchmark datasets in Table 1. For each dataset, we perform 50 adaptive sampling experiments for every method, and report the observed mean and standard deviation for each of the metrics. For all methods and datasets, the same set of 50 random seeds was used for generating the initial sample sets.

### 5.1 Compiler Flags for LULESH

Users often rely on the default choice of flags enabled by the -O3 flag to obtain the best performance that can be provided by a compiler.
However, it has been shown that the default options enabled by -O3 may not be best-suited for every application, and performance can be gained by tuning the individual flags [7]. We autotune the compiler flags for LULESH as our first use case. Because we want to compare the best-performing configuration obtained by various methods with the exhaustive best, we limit our exploration to 9-10 compiler flags, so that exhaustive collection of data is possible. Some of the flags used are listed in Table 1. The runtime obtained with the -O3 flag is 6.02 seconds. Figure 4 compares the results obtained for autotuning using GEIST and other competing methods. The initial sample size for these experiments was 96, and 50 samples were added in every iteration. We observe that GEIST finds significantly more (~2.6x) high-performing configurations in comparison to other methods. GEIST also outperforms random selection and Gaussian Process based sampling in avoiding low-performing configurations, but CCA outperforms GEIST in that metric. All methods quickly find configurations close to the global optimum, which is not far away from the best configuration in the initial random sample set. This result can be explained by the distribution presented in Figure 3b, which shows that several high-performing configurations exist. Nonetheless, the best-performing configuration obtained from all methods is significantly (2.2×) faster than the typical default of -O3. We also performed similar experiments with three other sets of compiler flags for parameter space sizes up to 25,920. For all scenarios, we obtained data distributions and autotuning results similar to those presented above. However, the global best performance obtained heavily depends on the compiler flags being explored and ranges from 2.72s to 5.92s. Nonetheless, all methods are able to find configurations that perform close to the optimum, and GEIST finds significantly more high-performing configurations. 
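For reference, the Gaussian Process baseline of Section 4.4 can be sketched as follows. This is a minimal illustration using scikit-learn; the specific acquisition rule — ranking candidates by predicted metric minus a multiple of the predictive standard deviation — is an assumption standing in for the paper's exact choice, and all names are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_select(X_train, y_train, X_candidates, n_add, kappa=1.0):
    """Pick the next batch: configurations predicted to perform well
    (low metric, since lower is preferred) or whose prediction is
    highly uncertain."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)
    mean, std = gp.predict(X_candidates, return_std=True)
    # Subtracting kappa*std favors points in poorly-modeled regions
    # of the space in addition to points predicted to be fast.
    score = mean - kappa * std
    return np.argsort(score)[:n_add]

# Toy usage on a 1-D parameter space with a noisy synthetic metric.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(30, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.normal(size=30)
candidates = rng.uniform(0, 1, size=(200, 1))
picked = gp_select(X, y, candidates, n_add=10)
assert len(picked) == 10
```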
### 5.2 Decomposition Selection for OpenAtom

In OpenAtom, users can decompose different tasks into different numbers of work units. This flexibility leads to a large parameter space, in which each configuration can take several minutes to execute. For the science problem simulated in this paper (32 molecules of water on 128 nodes of a Blue Gene/Q [18]), an expert user would choose a configuration that takes 1.6 seconds per step. Figure 5 shows that, similar to LULESH, GEIST identifies a significantly higher (4×) number of high-performing configurations in comparison to the other methods. Unlike the other methods, GEIST also successfully avoids exploring low-performing configurations. However, like LULESH, the OpenAtom dataset we tested contains many high-performing configurations (Figure 3a), and hence most methods are able to quickly find near-optimal (within 3% of the global best) configurations in 2 to 3 iterations of adaptive sampling. Gaussian Process-based sampling and GEIST require the minimum number of samples (189) to find these configurations, while random selection performs the worst and needs 389 samples.

### 5.3 Solver Selection for Hypre

The new_ij benchmark of the hypre suite allows the use of four parameters: solver, smoother, coarsening scheme, and interpolation operator, which together create a parameter space of size 4,580. By also modifying the power bounds, this parameter space increases to up to 25,198. We autotuned parameters with and without including different power bounds, and achieved similar results for both, so henceforth we discuss only the results without power bounds. Figure 6 shows that, except for random selection, all other methods are able to find many high-performing configurations. However, only GEIST is able to iteratively improve the performance of the configurations found, thus determining configurations within 3% of the global best.
These configurations found by GEIST are 5.6% and 9% better than the best configurations found by the next best methods, Gaussian Process and CCA, respectively. Moreover, it takes GEIST only 341 samples to find the near-optimal configurations. GEIST is able to outperform the other methods for hypre because it is able to identify the very few high-performing configurations that lie in the left-most bins of Figure 3d. While the other methods only find the good configurations from heavily occupied bins, GEIST is able to effectively explore the neighborhoods of those configurations and find the near-optimal ones.

Figure 7: Kripke time: GEIST outperforms all other methods and finds configurations that are within 19% and 10% of the global best using 144 and 208 samples. The next best method is random selection, which is 30% and 26% slower than the global best for these sample counts. Note that due to the small size of this dataset, only 16 samples are added in each iteration.

Figure 8: Kripke energy: GEIST is significantly better at finding low-energy configurations and avoiding very high-energy configurations, and finds configurations that consume ∼9% lower energy than configurations found by other methods.

### 5.4 Kripke: Time and Energy Optimization

In order to explore different architectural features and provide performance portability, Kripke provides several application-level options to change the code control flow without changing the science performed. Table 1 lists these options: different orderings for executing compute kernels, the number of group and energy sets to overlap computation and communication, and the OpenMP thread count. We explore this space to find configurations with minimum runtime. Additionally, by enabling power capping, we also search for configurations that minimize the total energy consumption of the execution.
An expert user's choice in this benchmark would have been to manually test each loop ordering with a few group/energy sets, and to optimize for energy at the 2nd and 3rd highest power levels. This would have resulted in an execution time of 15.2 seconds and an energy consumption of 4,742 Joules. Figure 7 shows that GEIST comprehensively outperforms all other methods in finding configurations with low execution time, and is also better at avoiding configurations with high execution time. GEIST finds configurations that are within 19% and 10% of the globally optimal configuration of 8.43s using only 144 and 208 samples, respectively. These runtimes are significantly better than the runtimes obtained using the random selection (27%), Gaussian Process (48%), and CCA (59%) methods, with a total sample size of 208. Similar results are obtained for optimizing energy consumption, as shown in Figure 8. GEIST finds a significantly higher number of low-energy configurations (6×) and is also the best method for avoiding high-energy configurations. For any given iteration or sample count, the best configurations GEIST finds consume ∼9% less energy than the best configurations discovered by other methods. The best configuration found by GEIST is within 4% of the global optimum of 2,533 Joules and needs only 2.5% (339 samples) of the total parameter space. As with hypre, we believe that GEIST is able to improve upon other methods in finding the best-performing configurations because of the distribution of the Kripke datasets (Figures 3e and 3f). GEIST uses the parameter graph neighborhood relations to explore the neighborhoods of high-performing configurations and find the few near-optimal configurations in the left-most bins.

### 5.5 Selecting RAJA Policies

Six different RAJA loops were used in our benchmark, five of which are nested loops with three to five nesting levels.
The underlying loop policies for each of these loops can be chosen at runtime, and include options to execute sequentially or with thread parallelism and to select the nesting level at which to invoke a parallel OpenMP region. Since different loop policies populate processor caches differently, we cannot tune loops independently and must explore the combined space of all policies and loops. An expert user would use OpenMP at the outermost level and obtain a 57.2s runtime. Figure 9 compares the quality of configurations discovered by GEIST with the other methods. With increasing iteration count and samples, we find that GEIST progressively gets better at selecting high-performing configurations, while all other methods show little further improvement.

## 6 DISCUSSION AND CONCLUSION

Table 2 summarizes the evaluation results presented in this paper. Broadly speaking, we see that for all test cases, GEIST is able to find high-performing configurations that are closer to the global optimum with fewer samples in comparison to other methods. The method that is second best to GEIST varies with the dataset being tuned. Furthermore, because GEIST quickly finds more high-performing configurations than other methods, each training iteration becomes progressively cheaper to sample than the previous one, thus speeding up the process towards convergence. An in-depth look at the optimal configurations selected revealed that, oftentimes, the configurations that provide the best performance are not intuitive, nor are they well known to expert users. For example, in OpenAtom, expert users tend to pick symmetric decompositions for multi-dimensional physical entities. However, significantly better performance is obtained using asymmetric decompositions (1.6s vs. 1.26s). Similarly, for RAJA policies, experienced users expect an OpenMP loop at the outermost level to work well, but we find that a complex combination of loop levels provides significantly better performance (57.28s vs. 4.61s).
Nonetheless, despite being unaware of the domain or parameter types, GEIST is able to find high-performing configurations after only a few sampling iterations. Finally, our study suggests that the difference between the high-performing configurations chosen by GEIST and by other methods increases as the distributions of performance metrics move to the right; i.e., when fewer high-performing configurations are available, GEIST is able to find them but the other methods are not. This is inherent in the design of GEIST, which uses sampling to intelligently avoid the under-performing regions of large parameter spaces. In conclusion, we have presented and shown that an adaptive sampling strategy that is able to exploit neighborhood relationships among configurations in the parameter space is very good at finding near-optimal configurations with few samples. We hope that this scheme, which does not require information about the domain, metric distribution, or user input, will help the HPC community autotune its codes using minimal resources.

REFERENCES

Parallel Programming with Migratable Objects: Charm++ in Practice (SC).
# The Konsole Handbook

Jonathan Singer, Kurt Hindenburg, Ahmad Samir, Robert Knight, Waldo Bastian, Mike McBride

# Contents

1 Introduction
  1.1 What is a terminal?
  1.2 Scrollback
  1.3 Profiles
  1.4 Mouse Buttons
  1.5 Drag and Drop
2 Command Reference
  2.1 The Menubar
    2.1.1 File Menu
    2.1.2 Edit Menu
    2.1.3 View Menu
    2.1.4 Bookmarks Menu
    2.1.5 Settings Menu
    2.1.6 Help Menu
  2.2 Konsole Dialogs
    2.2.1 Rename Tab Dialog
    2.2.2 Copy Input Dialog
    2.2.3 Adjust Scrollback Dialog
3 Command-line Options
4 Scripting Konsole
5 Terminal Key Bindings
  5.1 How Konsole Uses Key Bindings
    5.1.1 Introduction
    5.1.2 Key Combinations and Modes
    5.1.3 The Output Field
    5.1.4 Other System Resources
    5.1.5 Further Reading
6 Using Style Sheet for the Tab Bar
7 Did You Know?, Common Issues and More
  Did You Know?
  Common Issues
8 Credits and Copyright
A Links

**Abstract:** Konsole is KDE's terminal emulator.

# Chapter 1 Introduction

### 1.1 What is a terminal?

Konsole is an X terminal emulator, often referred to as a terminal or a shell. It emulates a command line interface in a text only window. Konsole typically runs a command shell, an application that executes commands that you type. The shell Konsole runs depends on your account settings. Consult your operating system documentation to know what the shell is, how to configure it and how to use it.

### 1.2 Scrollback

Konsole uses the notion of scrollback to allow users to view previously displayed output. By default, scrollback is on and set to save 1000 lines of output in addition to what is currently displayed on the screen. As lines of text scroll off the top of the screen, they can be reviewed by moving the scroll bar upwards, scrolling with a mouse wheel or through the use of the Shift+Page Up (to move back), Shift+Page Down (to move forward), Shift+Up Arrow (to move up a line) and Shift+Down Arrow (to move down a line) keys. The amount of scrolling using Shift+Page Up/Down can be switched between half and full page in the Scrolling tab of the profile configuration window (use Settings → Edit Current Profile... to open this window).

### 1.3 Profiles

Profiles allow the user to quickly and easily automate the running of common commands. Examples could include:

- ssh into another machine
- starting an irc session
- use tail to watch a file

All new and changed profiles are saved in the user's local home folder in $XDG_DATA_HOME/konsole.

Procedure to create a new profile:

1. Click on the menu entry **Settings → Manage Profiles...**
2. Switch to the **Profiles** page.
3. Click on the button **New Profile...**
4. Fill in the first entry with a name. This is the name that will show in the menu, and will be the default label instead of **Shell** when you start a session of this type.
5.
Enter a command just as you normally would if you opened a new shell and were going to issue that command. For our first example above, you might type `ssh administration`.
6. On the other tabs of the dialog, configure this session's appearance. You can configure a different font, color scheme, `$TERM` type and many other settings for each session.
7. Press the **OK** button.

The new session is now available in the **Manage Profiles...** dialog. Any profiles which have **Show in Menu** checked will be listed by their name in the **File → New Tab** menu. There will be no submenu if only the default profile is to be shown.

### 1.4 Mouse Buttons

This section details the use of the mouse buttons for the common right handed mouse button order. For the left handed mouse button order, swap left and right in the text below.

**Left**

All left mouse button clicks will be sent to a mouse-aware application running in Konsole. If an application will react on mouse clicks, Konsole indicates this by showing an arrow cursor. If not, an I-beam (bar) cursor is shown.

Holding the left mouse button down and dragging the mouse over the screen with a mouse-unaware application running will mark a region of the text. While dragging the mouse, the marked text is displayed in reversed color for visual feedback. Select **Copy** from the **Edit** menu to copy the marked text to the clipboard for further use within Konsole or another application. The selected text can also be dragged and dropped into compatible applications. Hold the **Ctrl** key and drag the selected text to the desired location.

Normally, new-line characters are inserted at the end of each line selected. This is best for cut and paste of source code, or the output of a particular command. For ordinary text, the line breaks are often not important. One might prefer, however, for the text to be a stream of characters that will be automatically re-formatted when pasted into another application.
To select in text-stream mode, hold down the **Ctrl** key while selecting normally. Pressing the **Ctrl** and **Alt** keys along with the left mouse button will select text in columns. Double-click with the left mouse button to select a word; triple-click to select an entire line.

If the upper or lower edge of the text area is touched while marking, Konsole scrolls up or down, eventually exposing text within the history buffer. The scrolling stops when the mouse stops moving.

After the mouse is released, Konsole attempts to keep the text in the clipboard visible by holding the marked area reversed. The marked area reverts back to normal as soon as the contents of the clipboard change, the text within the marked area is altered or the left mouse button is clicked.

To mark text in a mouse-aware application (Midnight Commander, for example) the **Shift** key has to be pressed when clicking.

**Middle**

Pressing the middle mouse button pastes text currently in the clipboard. Holding down the **Ctrl** key as you press the middle mouse button pastes the text and appends a new-line. That is convenient for executing pasted commands quickly, but it can be dangerous so use it with caution.

NOTE: If you have a mouse with only two buttons, pressing both the left mouse button and right mouse button together emulates the middle mouse button of a three button mouse. If you have a wheel as the middle button, rolling it in a mouse-unaware program will move Konsole's scrollbar.

**Right**

These items appear in the menu when the right mouse button is pressed:

- Copy
- Paste
- With a text selection, a submenu **Search for** with a list of the preferred Web Shortcuts and an option to configure web shortcuts
- Open File Manager
- Set Encoding
- Clear Scrollback
- Adjust Scrollback...
- Show Menu Bar, only when the menubar is hidden
- Switch Profile
- Edit Current Profile...
- Close Tab

In a mouse aware application, press the **Shift** key along with the right mouse button to get the popup menu.
### 1.5 Drag and Drop

If you drop a file, folder or URL on a Konsole window, a context menu appears with these actions:

<table> <thead> <tr> <th>Action</th> <th>Key Combination</th> </tr> </thead> <tbody> <tr> <td>Move Here</td> <td>Shift</td> </tr> <tr> <td>Copy Here</td> <td>Ctrl</td> </tr> <tr> <td>Link Here</td> <td>Ctrl+Shift</td> </tr> <tr> <td>Paste Location</td> <td>Shift+Ctrl</td> </tr> <tr> <td>Change Directory To</td> <td>Shift+Alt</td> </tr> <tr> <td>Cancel</td> <td>Esc</td> </tr> </tbody> </table>

**Move Here (Shift)** Move the dropped item into the current folder. This item only appears in the context menu if you have the rights to delete the dropped file or folder.

**Copy Here (Ctrl)** Copy the dropped item into the current folder.

**Link Here (Ctrl+Shift)** Insert a symbolic link to the dropped item.

**Paste Location** Insert the full file path of the dropped item at the cursor.

**Change Directory To** If a folder is dropped, this action appears in the context menu and allows you to change the working folder of the Konsole session.

**Cancel (Esc)** Break the drag and drop action.

If you press the shortcuts before releasing the left mouse button during drag and drop, no context menu appears and the actions will be executed immediately. If you want to use the Ctrl key for drag and drop or disable the context menu to insert URLs as text by default, enable the corresponding options on the Mouse tab in the profile settings dialog.

# Chapter 2 Command Reference

### 2.1 The Menubar

The menubar is at the top of the Konsole window. If the menubar is hidden, **Show Menu Bar** can be reached by right clicking in the window (as long as no full screen application is running in that window such as vi, minicom, etc.). The default shortcut is listed after each menu item. Alternatively you can use the shortcut Ctrl+Shift+M to show or hide the menubar.
### 2.1.1 File Menu

**File → New Window (Ctrl+Shift+N)** Opens a new separate Konsole window with the default profile

**File → New Tab (Ctrl+Shift+T)** Opens a new tab with the default profile

**File → Clone Tab** Attempts to clone the current tab in a new tab

**File → Save Output As... (Ctrl+Shift+S)** Saves the current scrollback as a text or html file

**File → Print Screen... (Ctrl+Shift+P)** Prints the current screen. By default the output is scaled to fit the size of the paper being printed on, with black text color and no background. In the print dialog these options can be changed on the Output Options tab.

**File → Open File Manager** Opens KDE's file manager at the current directory. By default, that is Dolphin.

**File → Close Tab (Ctrl+Shift+W)** Closes the current tab

**File → Close Window (Ctrl+Shift+Q)** Quits Konsole

NOTE: Konsole will display a confirmation dialog if there is more than one tab open. This dialog can be disabled by clicking on the **Do not ask again** checkbox. If you want to get the confirmation dialog back, delete the entry

```
[Notification Messages]
CloseAllTabs=true
```

in `$XDG_CONFIG_HOME/konsolerc`.

### 2.1.2 Edit Menu

**Edit → Copy (Ctrl+Shift+C)** Copies the selected text to the clipboard

**Edit → Paste (Ctrl+Shift+V)** Pastes text from the clipboard at the cursor location

**Edit → Select All** Selects all the text in the current window

**Edit → Copy Input To → All Tabs in Current Window** Allows input from the current session to be sent simultaneously to all sessions in the current window

**Edit → Copy Input To → Select Tabs... (Ctrl+Shift+.)** Allows input from the current session to be sent simultaneously to sessions picked by the user

**Edit → Copy Input To → None (Ctrl+Shift+/)** Stops sending input from the current session into other sessions

**Edit → Send Signal** Sends the specified signal to the shell process, or other process, that was launched when the new session was started.
Currently available signals are:

<table> <thead> <tr> <th>Signal</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>STOP</td> <td>to stop process</td> </tr> <tr> <td>CONT</td> <td>continue if stopped</td> </tr> <tr> <td>HUP</td> <td>hangup detected on controlling terminal, or death of controlling process</td> </tr> <tr> <td>INT</td> <td>interrupt from keyboard</td> </tr> <tr> <td>TERM</td> <td>termination signal</td> </tr> <tr> <td>KILL</td> <td>kill signal</td> </tr> <tr> <td>USR1</td> <td>user signal 1</td> </tr> <tr> <td>USR2</td> <td>user signal 2</td> </tr> </tbody> </table>

Refer to your system manual pages for further details by giving the command `man 7 signal`.

**Edit → Rename Tab... (Ctrl+Alt+S)** Opens a dialog box allowing you to change the name of the current tab (more info)

**Edit → ZModem Upload... (Ctrl+Alt+U)** Opens up a dialog to select a file to be uploaded if the required software is installed

**Edit → Find... (Ctrl+Shift+F)** Opens a search bar at the bottom of Konsole's window. This allows for case sensitive, forward or backwards, and regular expression searches.

**Edit → Find Next (F3)** Moves to the next search instance. If the search bar has the focus, you can use the shortcut Enter as well.

**Edit → Find Previous (Shift+F3)** Moves to the previous search instance. If the search bar has the focus, you can use the shortcut Shift+Enter as well.

### 2.1.3 View Menu

**View → Split View → Split View Left/Right (Ctrl+( )** Splits all the tabs into left and right views. Any output on one view is duplicated in the other view.

**View → Split View → Split View Top/Bottom (Ctrl+) )** Splits all the tabs into top and bottom views. Any output on one view is duplicated in the other view.
**View → Split View → Close Active (Ctrl+Shift+X)** Closes the current view

**View → Split View → Close Others (Ctrl+Shift+O)** Closes all non-current views

**View → Split View → Expand View (Ctrl+Shift+)** Makes the current view larger

**View → Split View → Shrink View (Ctrl+Shift-)** Makes the current view smaller

**View → Detach Current Tab (Ctrl+Shift+L)** Opens the current tab in a separate window. Quitting the previous Konsole window will not affect the newly created window.

**View → Detach Current View (Ctrl+Shift+H)** Opens the current split view in a separate window

**View → Monitor for Silence (Ctrl+Shift+I)** Toggles the monitoring of the current tab for lack of activity. By default, after 10 seconds of inactivity, an info icon will appear on the session's tab. The type of alerts can be changed through Settings → Configure Notifications → Silence in monitored session.

**View → Monitor for Activity (Ctrl+Shift+A)** Toggles the monitoring of the current tab for activity. Upon any activity, an info icon will appear on the session's tab. The type of alerts can be changed through Settings → Configure Notifications → Activity in monitored session.

**View → Read-only** Toggles the session to be read-only: no input is accepted, drag and drop is disabled.

**View → Enlarge Font (Ctrl++)** Increases the text font size

**View → Reset Font Size (Ctrl+0)** Resets the text font size to the profile default

**View → Shrink Font (Ctrl+-)** Decreases the text font size

**View → Set Encoding** Sets the character encoding

**View → Clear Scrollback** Clears the text in the scrollback

**View → Clear Scrollback and Reset (Ctrl+Shift+K)** Clears the text in the current tab and scrollback and resets the terminal

### 2.1.4 Bookmarks Menu

**Bookmarks → Add Bookmark (Ctrl+Shift+B)** Adds the current location

**Bookmarks → Bookmark Tabs as Folder...** Adds all tabs to a bookmark folder. A dialog will open for the bookmark folder name.

**Bookmarks → New Bookmark Folder...** Adds a new folder to the bookmark list. A dialog will open for the bookmark folder name.
**Bookmarks → Edit Bookmarks** Opens the bookmark editor

NOTE: The keditbookmarks program must be installed for this menu item to appear.

You can use the bookmark editor to manually add URLs. Currently, Konsole accepts the following:

- ssh://user@host:port
- telnet://user@host:port

### 2.1.5 Settings Menu

**Settings → Edit Current Profile...** Opens a dialog to configure the current profile

**Settings → Switch Profile** Switches the current profile to a listed profile

**Settings → Manage Profiles...** Opens an editor for managing profiles

**Settings → Show Menu Bar (Ctrl+Shift+M)** Toggles the menubar being visible

**Settings → Full Screen Mode (F11)** Toggles Konsole filling the entire screen

**Settings → Configure Shortcuts...** Opens the keyboard shortcut editor. More on shortcuts configuration can be found in the KDE Fundamentals.

Additionally, Konsole has a few special shortcuts with no corresponding menu item:

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Shift+Right</td> <td>Next Tab</td> </tr> <tr> <td>Shift+Left</td> <td>Previous Tab</td> </tr> <tr> <td>Ctrl+Shift+Left</td> <td>Move Tab Left</td> </tr> <tr> <td>Ctrl+Shift+Right</td> <td>Move Tab Right</td> </tr> <tr> <td>Ctrl+Shift+Ins</td> <td>Paste Selection</td> </tr> <tr> <td>Shift+Tab</td> <td>Next View Container</td> </tr> </tbody> </table>

**Settings → Configure Notifications...** Opens the notifications editor

**Settings → Configure Konsole...** Opens the Konsole settings editor. This dialog has options influencing the appearance and behaviour of the Tab Bar and general options for the Konsole window.

### 2.1.6 Help Menu

Konsole has some of the common KDE Help menu items; for more information read the section about the Help Menu of the KDE Fundamentals.

### 2.2 Konsole Dialogs

### 2.2.1 Rename Tab Dialog

The name of the current tab can be changed from this dialog. The dialog can be displayed via the menu, the shortcut Ctrl+Alt+S or by double-clicking on the tab in the tab bar.
These changes can be made permanent by editing the current profile. Konsole will substitute these tokens for local tabs:

- **%n**: program name
- **%d**: current directory (short)
- **%D**: current directory (long)
- **%h**: local host (short)
- **%u**: user name
- **%B**: user's Bourne prompt sigil ($ = normal user, # = superuser)
- **%w**: window title set by shell
- **%#**: session number

Konsole will substitute these tokens for remote tabs:

- **%c**: current program
- **%h**: remote host (short)
- **%H**: remote host (long)
- **%u**: user name
- **%U**: user name@ (if given)
- **%w**: window title set by shell
- **%#**: session number

Examples:

- **%d : %n** with /usr/src as the current directory and running bash will display *src : bash*
- **%D : %n** with /usr/src as the current directory and running top will display */usr/src : top*
- **%w (%#)** with ~ as the current directory and running vim in the first tab will display *[No Name] (~) - VIM(1)*

### 2.2.2 Copy Input Dialog

The text entered in one tab can simultaneously be sent to other tabs. This dialog allows you to select which tabs will receive that input. The current tab will be greyed out.

### 2.2.3 Adjust Scrollback Dialog

The scrollback options for the history size can be changed in this dialog. Any changes are for the current tab only and will not be saved to the profile.

Chapter 3 Command-line Options

When Konsole is started from the command line, various options can be specified to modify its behavior.

- **--help**: List various options.
- **--profile** *file*: Start Konsole using the specified profile instead of the default profile.
- **--fallback-profile**: Use the internal FALLBACK profile. This option is a shortcut for --profile FALLBACK/.
- **--workdir** *dir*: Open with *dir* as the initial working directory.
- **--hold, --noclose**: Do not close the initial session automatically when it ends.
- **--new-tab**: Create a new tab in an existing window rather than creating a new window.
- **--tabs-from-file** *file*: Create tabs as specified in the given tabs configuration file.

NOTE: The file has one tab per line in the following format: each line specifies a tab to open, using up to 4 fields describing how it is to open. Fields are delimited with ;; and a field name must have a : appended. Empty lines or lines beginning with # are ignored, so you can use lines beginning with # to add comments.

- title: a name for this tab; the default if blank or not specified
- workdir: working directory; ~ if blank or not specified
- profile: a Konsole profile to use; the default if blank or not specified
- command: a command to run

Each line should contain at least one of the command or profile fields.

Example: title: %n; command: /usr/bin/top ;; profile: Shell

- **--background-mode**: Start Konsole in the background and bring it to the front when Ctrl+Shift+F12 (by default) is pressed.
- **--separate, --nofork**: Run the new instance of Konsole in a separate process.
- **--show-menubar**: Show the menubar, overriding the default behavior.
- **--hide-menubar**: Hide the menubar, overriding the default behavior.
- **--show-tabbar**: Show the tabbar, overriding the default behavior.
- **--hide-tabbar**: Hide the tabbar, overriding the default behavior.
- **--fullscreen**: Start Konsole in fullscreen mode.
- **--notransparency**: Disable transparent backgrounds, even if the system supports them.
- **--list-profiles**: List all available profiles.
- **--list-profile-properties**: List all possible properties with name and type. See option -p. For more information, please visit the Konsole API Reference.
- **-p** *property=value*: Change the value of a profile property.
- **-e** *command*: Execute *command* instead of the normal shell.

NOTE: This option will catch all following arguments passed to Konsole and execute them as *command*, so it should always be used as the last option.

Konsole also accepts generic Qt™ and KDE Frameworks 5 options; see the man pages qt5options and kf5options.
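To make the --tabs-from-file format concrete, the following shell sketch writes a small, hypothetical tabs configuration file (the path and the commands are illustrative, not taken from the handbook) and shows how it would be passed to Konsole:

```shell
# Write an illustrative tabs configuration file. Fields are separated
# by ';;', each field name carries a trailing ':', and lines starting
# with '#' are comments.
cat > /tmp/konsole-tabs.txt <<'EOF'
# monitoring tab, using the Shell profile
title: monitor ;; command: /usr/bin/top ;; profile: Shell
# plain tab in the home directory with the default profile
title: home ;; workdir: ~
EOF

# Konsole would then be started as:
#   konsole --tabs-from-file /tmp/konsole-tabs.txt
grep -v '^#' /tmp/konsole-tabs.txt   # show the two tab definitions
```

Each non-comment line defines one tab; fields omitted on a line fall back to the defaults described above.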
Chapter 4 Scripting Konsole

Konsole supports numerous methods that can be used with D-Bus. There are two ways to use the D-Bus interface: Qt™'s GUI qdbusviewer and the command line qdbus.

Examples:

- `% qdbus` will display all services available.
- `% qdbus org.kde.konsole` will display the D-Bus interface for Konsole.
- `% qdbus org.kde.konsole /Windows/1` will display methods for controlling window 1.
- `% qdbus org.kde.konsole $KONSOLE_DBUS_WINDOW` will display methods for controlling the current window.
- `% qdbus org.kde.konsole /Sessions/1` will display methods for controlling session 1.
- `% qdbus org.kde.konsole $KONSOLE_DBUS_SESSION` will display methods for controlling the current session.
- `% qdbus $KONSOLE_DBUS_SERVICE $KONSOLE_DBUS_SESSION` will display methods for controlling the current Konsole's session.

If any of the above commands outputs: Service 'org.kde.konsole' does not exist, change `org.kde.konsole` to one of the following:

- `org.kde.konsole-$(pidof -s konsole)` (selects the first PID)
- `$KONSOLE_DBUS_SERVICE` (this can be used from the current Konsole)
- select one from the output of `qdbus | grep konsole`

For more information, please visit the D-Bus tutorial.

Chapter 5 Terminal Key Bindings

5.1 How Konsole Uses Key Bindings

5.1.1 Introduction

Konsole uses *.keytab files to translate key combinations into control characters and escape sequences that are sent to the shell or to interactive programs (typically programs that use the Alternate Screen buffer, e.g. vim, less, screen) running in the shell.

Users can customize the key binding settings in Konsole using the Key Bindings Editor. A key combination can be configured to send a specific control character or escape sequence to the terminal.

You can open the Key Bindings Editor from the menu entry Settings → Edit Current Profile, then going to the Keyboard tab. Listed there are the Key Bindings schemas that come by default with Konsole.
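As an aside, the D-Bus environment variables from Chapter 4 can be combined into a small helper. The sketch below only echoes the qdbus command line (a dry run), so it can be inspected without a running Konsole; the `sendText` method name and the variable fallbacks are assumptions based on the session interface listed in Chapter 4, not guarantees.

```shell
# Compose (but do not execute) a qdbus call that would send text to the
# current Konsole session. KONSOLE_DBUS_SERVICE / KONSOLE_DBUS_SESSION are
# set by Konsole itself; the fallbacks below are illustrative defaults.
send_to_session() {
    local service="${KONSOLE_DBUS_SERVICE:-org.kde.konsole}"
    local session="${KONSOLE_DBUS_SESSION:-/Sessions/1}"
    # sendText is assumed to be exposed on the Session interface
    echo qdbus "$service" "$session" sendText "$1"
}

send_to_session 'echo hello'
```

Dropping the `echo` in front of `qdbus` would actually perform the call, provided a Konsole instance exports that method.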
5.1.2 Key Combinations and Modes

Key combinations follow the pattern:

`Key (+|-)Mode [(+|-)Mode ...]`

for example:

- Up+Shift+AppScreen
- Down+Shift-AppScreen
- Space+Ctrl

Key names are defined in the qnamespace.h header file, with the 'Qt::Key_' prefix removed; for a list of key names check the Qt::Key enumeration in the Qt documentation.

A '+' preceding a Mode name means that mode is set; for a modifier key, that means it's pressed, whereas for all other modes it means that particular mode is in effect (i.e. active). For example, '+Ctrl' means the key combination will work only if the Ctrl key is pressed.

A '-' preceding a Mode name means that mode is reset; basically this is the opposite of putting '+' before a Mode name, so for a modifier key that means the key isn't pressed, whereas for all other modes it means that particular mode is inactive. For example, '-Ctrl' means the key combination will work only if the Ctrl key is not pressed.

The supported Key Bindings modes are listed below:

**Alt, Ctrl, Shift**: One or more of these modes can be used in a key combination; if any of them is set, the key combination uses that modifier key, respectively, and vice versa if it's reset.

**AnyModifier**: If this mode is set, the key combination uses any modifier key (any of the previous three modifier keys); and vice versa if it's reset.

**Ansi**: If this mode is set, Konsole will send ANSI escape and control sequences. If this mode is reset, Konsole will send VT52 escape and control sequences.

**AppScreen**: If this mode is set, the key combination will only affect interactive programs that use the Alternate Screen buffer. If this mode is reset, the key combination will only affect the terminal when it's using the Normal Screen buffer.

NOTE: Konsole makes use of two screen buffers:

- The Normal Screen buffer (default): allows you to scroll back to view previous lines of output; this is the default buffer you usually use to execute
commands, etc.
- The Alternate Screen buffer: the terminal switches to this buffer when you run an interactive program (e.g. less, vim, screen, tmux, etc.)

**KeyPad**: If this mode is set, the key combination uses a key on the Keypad (Number Pad). This mode is useful to distinguish between keys on the keyboard and keys on the Keypad. For example, when Num Lock is *on* you can configure two separate key combinations, one using the key labelled '1' on the keyboard (usually under the F1 key) and the other using the key labelled '1' on the Keypad. The same concept applies when Num Lock is *off* for the End, Home, and Cursor Keys on the Keypad.

**AppCursorKeys**: This mode implements the VT100 Cursor Keys Mode (DECCKM). It controls the escape sequences each Cursor Key (*Up*, *Down*, *Right*, *Left*) sends, depending on whether this mode is set or reset. By default, Konsole follows the XTerm behavior of treating the **Home** and **End** keys as cursor keys with respect to DECCKM.

**AppKeyPad**: If this mode is set, the key combination will only work when the Keypad is in Application Mode (DECKPAM). If this mode is reset, the key combination will only work when the Keypad is in Numeric Mode (DECKPNM).

**NewLine**: If this mode is set, the **Return** (Enter) key on the keyboard will send both the Carriage Return `\r` and New Line `\n` control characters. If this mode is reset, the **Return** key will send only a Carriage Return `\r`. The same applies to the **Enter** key on the Keypad. This mode emulates the LNM - Line Feed/New Line Mode.

Note that each combination of Key and Modes (set/reset) must be unique. For example, consider the following two rules:

- A+Shift : 'A'
- a : 'a'

Konsole will *not* accept the small letter 'a' rule; you have to add a '-Shift' to that rule to make it work.
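To connect the mode syntax with what actually reaches the terminal: a binding such as `key Up -Shift+Ansi+AppCuKeys : "\EOA"` (an illustrative entry modelled on XTerm-style defaults, not quoted from a shipped scheme) makes Konsole emit ESC O A when DECCKM is set. The bytes behind the `\E` notation can be inspected from any shell:

```shell
# "\EOA" in keytab notation is ESC (0x1b, written \033 in the shell)
# followed by the literal characters 'OA'. cat -v renders ESC as '^['
# so the control character becomes visible.
printf '\033OA' | cat -v    # application-cursor-mode Up: prints ^[OA
printf '\033[A' | cat -v    # normal-mode Up: prints ^[[A
```

This is also a handy way to check what an interactive program really receives when debugging a custom binding.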
### 5.1.3 The Output Field

In the Output field you can add the escape sequences or control characters that you want Konsole to send to the terminal when the associated key combination is pressed.

You can also use any of the following keywords, each of which has a special meaning in Konsole:

- scrollUpLine : scroll up one line in the shell history scrollback buffer
- scrollUpPage : scroll up one page in the shell history scrollback buffer
- scrollDownLine : scroll down one line in the shell history scrollback buffer
- scrollDownPage : scroll down one page in the shell history scrollback buffer
- scrollUpToTop : scroll up to the beginning of the shell history scrollback buffer
- scrollDownToBottom : scroll down to the end of the shell history scrollback buffer

You can also use strings with C-string syntax; the following escape sequences may be used:

- \E : Escape
- \\ : Backslash
- \" : Double quote
- \t : Tab
- \r : Carriage Return
- \n : New line
- \b : Backspace
- \xHH : where HH are two hex digits

**Tip**: This can be used to send ASCII control characters, e.g. `\x00`, which is the NUL character.

5.1.4 Other System Resources

There are other system resources that can affect terminal Key Bindings:

- Consult the terminfo or termcap database for the expected escape sequences and control characters that each key combination is supposed to send.
- It is likely that your system has other keyboard databases which have to be in sync too (e.g. /etc/inputrc and readline for the BASH shell), as they affect the operations (interactions) bound to key combinations.

5.1.5 Further Reading

For more information on escape sequences and control characters, check the following documentation:

- The VT100 user guide
- The VT102 user guide
- The comprehensive and indispensable XTerm Control Sequences documentation

Chapter 6 Using Style Sheet for the Tab Bar

The default style sheet for the tab bar sets the minimum and maximum tab widths.
The user can create a .css file and have Konsole use that as the style sheet for the tab bar. In the .css file, the widget to use is `QTabBar::tab`. For more information, consider reading the Qt Style Sheets documentation.

Examples:

- Change the selected tab's background to a light gray

```css
QTabBar::tab:selected { background: #999999 }
```

- Change the selected tab's text to red

```css
QTabBar::tab:selected { color: red }
```

- All tabs will be at least 200 pixels in width

```css
QTabBar::tab { min-width: 200px }
```

- Only the selected tab will be at least 200 pixels in width

```css
QTabBar::tab:selected { min-width: 200px }
```

- Any of these can be combined in one file

```css
QTabBar::tab:selected {
    background: #999999;
    color: red;
    min-width: 200px;
}
QTabBar::tab { min-width: 100px }
```

Chapter 7 Did You Know?, Common Issues and More

7.1 Did You Know?

- Pressing Ctrl while selecting text will cause line breaks to be converted to spaces when pasted.
- Pressing the Ctrl+Alt keys while selecting text will select columns.
- The Ctrl+Wheel combination will zoom the text size, like in Konqueror and Firefox.
- When a program evaluates either mouse button, pressing the Shift key will allow the popup menu to appear.
- The Ctrl+Shift+F10 shortcut will activate the menu.
- The Shift+Insert keys will insert the clipboard contents.
- Double-clicking will select a whole word. Continuing to hold the mouse button and moving the mouse will extend the selection.
- Triple-clicking will select a whole line. Continuing to hold the mouse button and moving the mouse will extend the selection.
- There is a hidden feature for the "%d" formatter in the tab title. You can tell Konsole to abbreviate a directory name into its first character. For example, "/path/to/konsole/src" can be abbreviated into "konsole/s". If you want to enable and control this hidden feature, open konsolerc in `qtpaths --paths GenericConfigLocation` and add the following lines:

```plaintext
[ProcessInfo]
CommonDirNames=name1,name2,name3...
```

NOTE: If you are using Yakuake, you need to edit yakuakerc in `qtpaths --paths GenericConfigLocation` instead.

7.2 Common Issues

- Some fonts might be unavailable for usage in Konsole, although they are available in other applications. That doesn't mean there is a bug in Konsole. Konsole requires monospaced fonts to provide the best visual result, so it asks Qt™ to only list monospaced fonts. Starting with version 16.08 (August 2016), Konsole can be configured to allow selecting any font, with the caveat that the display may not be correct.
- Since KDE4 all the tabs use the same process ID. This has the side-effect that if one tab's process has issues, all the other tabs may experience issues as well. This is most noticeable when a command that connects to an external device or system (ssh, nfs) has issues.
- Konsole treats arguments after the -e option as one command and runs it directly, instead of parsing it and possibly dividing it into sub-commands for execution. This is different from xterm.
  - `konsole -e "command1 ; command2"` does not work
  - `konsole -e $SHELL -c "command1 ; command2"` works
- Konsole doesn't provide a convenience option for running a login shell, because the developers don't like the idea of running a login shell in a terminal emulator. Of course, users can still run a login shell in Konsole if they really need to: edit the profile in use and modify its command to start a login shell explicitly, such as `bash -l` or `zsh -l`.
- The --new-tab option sometimes behaves strangely. It may create a new window, or it may create a new tab in another existing Konsole window instead of the current Konsole window. These behaviors feel strange, but they are not necessarily bugs. The --new-tab option tries to reuse existing Konsole windows, but not all Konsole windows are reusable.
All Konsole windows opened through KRunner are reusable, while most Konsole windows opened from the command line are not.

Chapter 8 Credits and Copyright

Konsole is currently maintained by Kurt Hindenburg kurt.hindenburg@gmail.com

Previous Konsole maintainers include: Robert Knight robertknight@gmail.com and Waldo Bastian bastian@kde.org

The application Konsole Copyright (c) 1997-2008 Lars Doelle lars.doelle@on-line.de

This document was originally written by Jonathan Singer jsinger@leeta.net

This document was updated for KDE 4.x by Kurt Hindenburg kurt.hindenburg@gmail.com

This document was updated for KDE 3.4 by Kurt Hindenburg kurt.hindenburg@gmail.com

Originally converted to DocBook SGML by Mike McBride and Lauri Watts

This documentation is licensed under the terms of the GNU Free Documentation License.

This program is licensed under the terms of the GNU General Public License.

Appendix A Links

For more information please visit these websites:

- Konsole's homepage on KDE's UserBase
- Konsole's homepage
- Konsole's mailing list
- KDE on FreeBSD
- KDE on Solaris
Classifying code comments in Java open-source software systems

Luca Pascarella, Delft University of Technology, Delft, The Netherlands, L.Pascarella@tudelft.nl
Alberto Bacchelli, Delft University of Technology, Delft, The Netherlands, A.Bacchelli@tudelft.nl

Abstract—Code comments are a key software component containing information about the underlying implementation. Several studies have shown that code comments enhance the readability of the code. Nevertheless, not all the comments have the same goal and target audience. In this paper, we investigate how six diverse Java OSS projects use code comments, with the aim of understanding their purpose. Through our analysis, we produce a taxonomy of source code comments; subsequently, we investigate how often each category occurs by manually classifying more than 2,000 code comments from the aforementioned projects. In addition, we conduct an initial evaluation on how to automatically classify code comments at line level into our taxonomy using machine learning; initial results are promising and suggest that an accurate classification is within reach.

I. INTRODUCTION

While writing and reading source code, software engineers routinely introduce code comments [6]. Several researchers investigated the usefulness of these comments, showing that thoroughly commented code is more readable and maintainable. For example, Woodfield et al. conducted one of the first experiments demonstrating that code comments improve program readability [35]; Tenny et al. confirmed these results with more experiments [31], [32]. Hartzman et al. investigated the economical maintenance of large software products, showing that comments are crucial for maintenance [12]. Jiang et al. found that comments that are misaligned to the annotated functions confuse authors of future code changes [13]. Overall, given these results, having abundant comments in the source code is a recognized good practice [4].
Accordingly, researchers proposed to evaluate code quality with a new metric based on the code/comment ratio [21], [9]. Nevertheless, not all comments are the same. This is evident, for example, by glancing through the comments in a source code file[1] from the Java Apache Hadoop Framework [1]. In fact, we see that some comments target end-user programmers (e.g., Javadoc), while others target internal developers (e.g., inline comments); moreover, each comment is used for a different purpose, such as providing the implementation rationale, separating logical blocks, and adding reminders; finally, the interpretation of a comment also depends on its position with respect to the source code. Defining a taxonomy of the source code comments that developers produce is an open research problem. Haouari et al. [11] and Steidl et al. [28] presented the earliest and most significant results in comment classification. Haouari et al. investigated developers' commenting habits, focusing on the position of comments with respect to source code and proposing an initial taxonomy that includes four high-level categories [11]; Steidl et al. proposed a semi-automated approach for the quantitative and qualitative evaluation of comment quality, based on classifying comments in seven high-level categories [28]. In spite of the innovative techniques they proposed for both understanding developers' commenting habits and assessing comment quality, the classification of comments was not their primary focus.

In this paper, we focus on increasing our empirical understanding of the types of comments that developers write in source code files. This is a key step to guide future research on the topic. Moreover, this increased understanding has the potential to (1) improve current quality analysis approaches that are restricted to the comment ratio metric only [21], [9] and to (2) strengthen the reliability of other mining approaches that use source code comments as input (e.g., [30], [23]).
To this aim, we conducted an in-depth analysis of the comments in the source code files of six major OSS systems in Java. We set up our study as an exploratory investigation. We started without hypotheses regarding the content of source code comments, with the aim of discovering their purposes and roles, their format, and their frequency. To this end, we (1) conducted three iterative content analysis sessions (involving four researchers) over 50 source files including about 250 comment blocks to define an initial taxonomy of code comments, (2) validated the taxonomy externally with 3 developers, (3) inspected 2,000 source code files and manually classified (using a new application we devised for this purpose) over 15,000 comment blocks comprising more than 28,000 lines, and (4) used the resulting dataset to evaluate how effectively comments can be automatically classified. Our results show that developers write comments with a large variety of different meanings and that this should be taken into account by analyses and techniques that rely on code comments. The most prominent category of comments summarizes the purpose of the code, confirming the importance of research related to automatically creating this type of comments. Finally, our automated classification approach reaches promising initial results. --- 1https://tinyurl.com/zqezgppq of the code/comment ratio proposed by Garcia et al., one of the easiest solutions consists in the evaluation of developers’ ability and code complexity; when well-written and comments. When comments are omitted, much depends on the quality of both code and comments. In a well-documented file, comments help maintenance, the aforementioned tasks become mandatory. When developers perform software code, knowing the choices and rationale of authors, and find-} when this metric considers only one kind of comment. More precisely, Garcia et al. 
focus only on the presence or absence of comments, omitting the possibility of using comments with different benefits for different end-users. Unfortunately, the previous sample of code represents a case where the author used comments for different purposes. The comment on line 31 represents a note that developers use to remember an activity, an improvement, or a fix. On line 20 the author marks his contribution to the file. Both these comments represent real cases where the presence of comments increases the code/comment ratio without any real effect on code readability. This situation hinders the validity of this kind of metric and indicates the need for a more accurate approach to tackle the problem.

B. An existing taxonomy of source code comments

A great source of inspiration for our work comes from Steidl et al., who presented a first detailed approach for evaluating comment quality [23]. One of the key steps of their approach is to first automatically categorize the comments to differentiate between different comment types. They define a preliminary taxonomy of comments that comprises 7 high-level categories: COPYRIGHT, HEADER, MEMBER, INLINE, SECTION, CODE, and TASK. They provide evidence that their quality model, based on this taxonomy, provides important insights on documentation quality and can reveal quality defects in practice. The study of Steidl et al. demonstrates the importance of treating comments in a way that suits their different categories. However, the creation of the taxonomy was not the focus of their work, as also witnessed by the few details given about the process that led to its creation. In fact, we found a number of cases in which the categories did not provide adequate information or did not differentiate the type of comments enough to obtain a clear understanding.
To further clarify this, we consider three examples taken from Listing 1: **Member category.** Lines 5, 6, 7 and 8 correspond to the MEMBER category in the taxonomy by Steidl et al. In fact, MEMBER comments describe the features of a method or field and are located near its definition [23]. Nevertheless, we see that the function of line 6 differs from that of line 7; the former summarizes the purpose of the method, the latter gives notice about replacing the usage of the method with an alternative. By classifying these two lines together, one would lose this important difference. **IDE directives.** Line 33 does not belong to any explicit category in the taxonomy by Steidl et al. In this case, the target is not a developer, but another stakeholder: the Integrated Development Environment (IDE). Similarly, line 23 does not have a category, although it is a possibly important external reference to read for more details. **Noise.** Line 36 represents a case of a comment that should be excluded from any further analysis. Since it does not separate parts of the code, the SECTION category would not apply, and an automated classification approach would wrongly assign it to one of the other categories. No sort of noise category is considered. With our work, we specifically focus on devising an empirically grounded, fine-grained classification of comments that expands on previous initial efforts. Our aim is to get a comprehensive view of the comments, by focusing on the purpose of the comments written by developers. Besides improving our scientific understanding of this type of artifact, we expect this work to also be beneficial, for example, to the effectiveness of the quality model proposed by Steidl et al. and other approaches relying on mining and analyzing code comments (e.g., [21], [30], [23]). --- II.
MOTIVATING EXAMPLE

```java
public class STSubscriptExpression extends SExpression {

    private static CSpellingService instance;

    /**
     * Returns the created expression, or null in case of error.
     */
    @Deprecated
    public STExpression getSubscriptExpression() {
        if (instance == null) {
            instance = new Expression(ConsoleEditors.getPreferenceStore());
        }
        return instance;
    }

    /**
     * Handle terminated sub-launch.
     */
    private void STLaunchTerminated(ILaunch launch) {
        if (this == launch) {
            // Remove sub launch, keeping the processes of the terminated launch
            // to show the association and to keep the console content accessible
            if (subLaunches.remove(launch) == null) {
                // terminate ourselves if this is the last sub launch
                if (subLaunches.size() == 0) {
                    // TODO: Check the possibility to exclude it
                    // monitor.validate()
                    monitor.subTask("Terminated"); // $NON-NLS-1$
                    fTerminate = true;
                    terminate(); // 90%
                    //
                }
            }
        }
    }
}
```

Listing 1. Example of Java file.

III. METHODOLOGY

This section defines the overall goal of our study, motivates our research questions, and outlines our research method. **A. Research Questions** The ultimate goal of this study is to understand and classify the primary purpose of code comments written by software developers. In fact, past research showed evidence that comments provide practitioners with great assistance during maintenance and future development, but not all comments are the same or bring the same value. We started by analyzing past literature, looking for similar efforts on the analysis of code comments. We observed that only a few studies define a rudimentary taxonomy of comments and none of them provides an exhaustive categorization of all kinds of comments. Most past work focuses on the impact of comments on software development processes such as code understanding, maintenance, or code review, and the classification of comments is only treated as a side outcome (e.g., [31], [32]). Therefore, we set our first research question: RQ1.
How can code comments be categorized? Given the importance of comments in software development, the natural next step is to apply the resulting taxonomy and investigate the primary use of comments. Therefore, we investigate whether some classes of comments are predominant and whether there is a pattern across different projects. This investigation is reflected in our second research question: RQ2. How often does each category occur? Finally, we investigate to what extent an automated approach can classify unseen code comments according to the taxonomy defined in RQ1. An accurate automated classification mechanism is the first essential step in using the taxonomy to mine information from large-scale projects and to improve existing approaches that rely on code comments. This leads to our last research question: RQ3. How effective is an automated approach, based on machine learning, in classifying code comments? **B. Selection of subject systems** To conduct our analysis, we focused on a single programming language (i.e., Java) and on projects whose source code is publicly available, i.e., open-source software (OSS) projects. In particular, we selected six heterogeneous software systems: Apache Spark, Eclipse CDT, Google Guava, Apache Hadoop, Google Guice, and Vaadin. They are all open-source projects and their change history is controlled with the Git version control system. Table I details the selected systems. We selected unrelated projects emerging from the context of four different software ecosystems (i.e., Apache, Google, Eclipse, and Vaadin); the development environment, the number of contributors, and the project size differ, thus mitigating some threats to external validity.
<table> <thead> <tr> <th>Project</th> <th>Java source lines</th> <th>Commits</th> <th>Contributors</th> <th>Sample sets</th> </tr> </thead> <tbody> <tr> <td>Apache Spark</td> <td>73.5k</td> <td>24.7k</td> <td>39k</td> <td>61</td> </tr> <tr> <td>Eclipse CDT</td> <td>1.239k</td> <td>466k</td> <td>26k</td> <td>1,252</td> </tr> <tr> <td>Google Guava</td> <td>252k</td> <td>88k</td> <td>4k</td> <td>158</td> </tr> <tr> <td>Apache Hadoop</td> <td>1.255k</td> <td>396k</td> <td>15k</td> <td>672</td> </tr> <tr> <td>Google Guice</td> <td>9k</td> <td>5k</td> <td>2k</td> <td>59</td> </tr> <tr> <td>Vaadin</td> <td>2.643k</td> <td>1.101k</td> <td>91k</td> <td>401</td> </tr> </tbody> </table> **C. Categorization of code comments** To answer our first research question about categorizing code comments, we conducted three iterative content analysis sessions involving 4 software engineering researchers (3 Ph.D. candidates and 1 faculty member) with at least 3 years of programming experience. Two of these researchers are authors of this paper. In the first iteration, we started by choosing the 6 projects (reported in Table I) and sampling 35 files with a large variety of code comments. Subsequently, we analyzed all source code and comments together. During this analysis we defined some obvious categories and left some comments undecided; this resulted in a first draft taxonomy with temporary category names. In the second phase, we first worked individually, analyzing 10 new files to check and suggest improvements to the draft taxonomy, and then gathered to discuss the findings. The second phase resulted in the validation of some clusters in our draft and the redefinition of others. The third phase was conducted as a team, and we analyzed 5 previously unseen files.
During this session we completed the final draft of our taxonomy, verifying that each kind of comment we encountered was covered by our definitions and that overlapping categories were absent. Through this iterative process, we defined a taxonomy having a hierarchy with two layers. The top layer consists of 6 categories and the inner layer consists of 16 subcategories. Validation. We externally validated the resulting taxonomy with 3 professional developers having 3 to 5 years of Java programming experience. We conducted one session with each developer. At the beginning of the session, the developer received a printed copy of the description of the comment categories in our taxonomy (similar to the explanation we provide in Section IV-A) and was allowed to read through it and ask questions to the researcher guiding the session. Afterwards, the developer was required to log into COMMEAN (a web application, described in Section III-D, that we devised for this task and to facilitate the large-scale manual classification necessary to answer RQ2 and RQ3) and classify each comment in 3 Java source code files (the same files were used for all the developers), according to the provided taxonomy. During the classification, the researcher was not in the experiment room, but the printed taxonomy could be consulted. At the end of the session, the guiding researcher came back to the experiment room and asked the participant to comment on the taxonomy and the classification task. At the end of all three sessions, we compared the differences (if any) among the classifications that the developers produced. All the participants found the categories clear and the task feasible; however, they also reported the need to consult the printed taxonomy several times during the session to make sure that their choice was in line with the description of the category. The analysis of the three sets of answers showed a few minor differences, with an agreement above 92%.
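Assuming the agreement figure reported above is simple percent agreement over per-comment labels (the text does not name the exact measure), it can be sketched as follows; the class name, method name, and the example labels are ours, not the study's data:

```java
import java.util.List;

public class Agreement {

    // Percentage of items for which two annotators chose the same label.
    public static double percent(List<String> a, List<String> b) {
        if (a.size() != b.size()) {
            throw new IllegalArgumentException("label lists must have equal size");
        }
        int same = 0;
        for (int i = 0; i < a.size(); i++) {
            if (a.get(i).equals(b.get(i))) {
                same++;
            }
        }
        return 100.0 * same / a.size();
    }

    public static void main(String[] args) {
        // Hypothetical labels from two annotators for four comments
        List<String> dev1 = List.of("SUMMARY", "USAGE", "TODO", "LICENSE");
        List<String> dev2 = List.of("SUMMARY", "USAGE", "RATIONALE", "LICENSE");
        System.out.println(Agreement.percent(dev1, dev2) + "%"); // 3 of 4 labels agree
    }
}
```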
The differences were all within the same top category and mostly regarded cases where the developers split certain code blocks into two sub-categories. **D. A dataset of categorized code comments, publicly available** To answer the second research question about the frequency of each category, we needed a statistically significant set of code comments classified according to the taxonomy produced as an answer to RQ1. Sampling approach. Since the classification had to be done manually, we relied on random sampling to produce a statistically significant set of code comments from each of the six OSS projects we considered in our study. To establish the size of the sample sets, we used as a unit the number of files, rather than the number of comments: This results in sample sets that give a more realistic overview of how comments are distributed in a system. In particular, we established the size \( n \) of such a set with the following formula [33]: \[ n = \frac{N \cdot \hat{p}\hat{q} \cdot z_{\alpha/2}^{2}}{(N - 1) \cdot E^{2} + \hat{p}\hat{q} \cdot z_{\alpha/2}^{2}} \] The size was chosen to allow simple random sampling without replacement. In the formula, \( \hat{p} \) is a value between 0 and 1 that represents the proportion of files containing a specific block of code comment, \( \hat{q} \) is the proportion of files not containing such kind of comment, and \( z_{\alpha/2} \) is the critical value of the standard normal distribution for the desired confidence level. Since the \textit{a-priori} proportion \( \hat{p} \) is not known, we consider the worst-case scenario where \( \hat{p} \cdot \hat{q} = 0.25 \). In addition, considering we are dealing with a small population (i.e., 557 Java files for the Google Guice project), we use the finite population correction factor to take into account its size \( N \).
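As a minimal sketch, the finite-population sample-size estimate can be computed as follows; the class and method names are ours, and \( z_{\alpha/2} = 1.96 \) (95% confidence) is an assumed input:

```java
public class SampleSize {

    // Finite-population sample size for simple random sampling without
    // replacement, in the worst case p*q = 0.25 (i.e., p = q = 0.5).
    // population: number of files N; e: margin of error; z: critical value
    // of the standard normal distribution (1.96 for 95% confidence).
    public static int compute(int population, double e, double z) {
        double pq = 0.25;
        double num = population * pq * z * z;
        double den = (population - 1) * e * e + pq * z * z;
        return (int) Math.ceil(num / den);
    }

    public static void main(String[] args) {
        // e.g., the 557 Java files of the Google Guice project
        System.out.println(SampleSize.compute(557, 0.05, 1.96));
    }
}
```

For very large populations the correction factor vanishes and the estimate approaches the familiar \( z^{2}\hat{p}\hat{q} / E^{2} \approx 384 \) files.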
We sample to reach a confidence level of 95% and an error \( E \) of 5% (i.e., if a specific comment entity is present in \( f \% \) of the files in the sample set, we are 95% confident it will be in \( f \% \pm 5 \% \) of the files of our population). The suggested value for the sample set is 1,925 files. In addition, since we split the sample sets in two parts with an overlapping chunk for validation, we finally sampled 2,000 files. This value does not significantly change the error level, which remains close to 5%. This choice only validates the quality of our dataset as a representation of the overall population: It is not related to the \textit{precision} and \textit{recall} values presented later, which are actual values based on manually analyzed elements. Manual classification. Once the sample of files with comments was selected, each of them had to be manually classified according to our taxonomy. To facilitate this error-prone and time-consuming task, we built a web application, named \textsc{ComMean}. Figure 1 shows the main page of \textsc{ComMean}, which comprises the following components: - The \textbf{Actions} panel (1) handles the authentication of the users and several actions such as ‘start’, ‘suspend’, or ‘send classification’. In addition, the panel keeps the user updated on the status of the classification, showing the path of the resource loaded in the application and the progress with the syntax \( I-P/T \), where \( I \) represents the current index, \( P \) the progress, and \( T \) the total number of files to be processed. - The \textbf{Annotation} panel (2) allows the user to append a pre-defined label to the selected text or define a new label. It also allows the user to append a free-text comment, create a link between comments and code, or categorize text composed of multiple parts. In addition, two keyboard shortcuts help the user append the current label to the selected text and create a link between source code and comments.
- The \textbf{Source view} panel (3) is the main view of the application. It contains the Java source file with syntax highlighting to help users during the classification and increase the quality of the analysis. In addition, the processed parts of the file are marked with different colors. - The \textbf{Status} panel (4) shows the progress of the current file. A dynamic table is created when a new comment is added. A row of the table contains the initial position, the final position, the label used in the categorization, a summary of how many parts compose it, and a summary of linked code (if any). By clicking on rows, the corresponding text is highlighted, and using the delete button the user is able to cancel a wrong classification. - The \textbf{Selection} panel (5) shows details such as the selected text, initial position, final position, and length of the text. The two authors of this paper manually inspected the sample set composed of 2,000 files. One author analyzed 100% of these files, while the other analyzed a random, overlapping subset comprising 10% of the files. These overlapping files were used to verify their agreement, which, similarly to the external validation of the taxonomy with professional developers (Section III-C), highlighted only negligible differences. Moreover, this large-scale categorization also confirmed the exhaustiveness of the taxonomy created in RQ1: Neither of the annotators felt that comments, or parts of comments, should have been classified by creating a new category. Finally, the two researchers annotated, when present, any link between comments and the code they refer to. This allows the use of our dataset by future approaches that attempt to recover the traceability links between code and comments. We make our dataset publicly available [24]. E.
Automated classification of source code comments In the third research question we set out to investigate to what extent and with which accuracy source code comments can be automatically categorized according to the taxonomy resulting from the answer to RQ1. Employing sophisticated classification techniques (e.g., based on deep learning approaches [10]) to accomplish this task goes beyond the scope of the current work. Our aim is two-fold: (1) verifying whether it is feasible to create an automatic classification approach that provides fair accuracy and (2) defining a reasonable baseline against which future methods that aim at a more accurate, project-specific classification can be tested. Classification granularity. We set the automated classification to work at line level. In fact, during our manual classification, we found several blocks of comments that had to be split and classified into different categories (similarly to the block defined in lines 5–8 in Listing 1), and in the vast majority of the cases (96%), the split was at line level. In less than 4% of the cases, one line had to be classified into more than one category. In these cases, we replicated the line in our dataset for each of the assigned categories, to get a lower bound on the effectiveness in these cases. Classification technique. Having created a reasonably large dataset to answer RQ2 (it comprises more than 15,000 comment blocks totaling over 30,000 lines), we employ supervised machine learning [8] to build the automated classification approach. This kind of machine learning uses a pre-classified set of samples to infer the classification function. In particular, we tested two different classes of supervised classifiers: (1) probabilistic classifiers, such as naive Bayes or multinomial naive Bayes, and (2) decision tree algorithms, such as J48 and Random Forest.
These classes make different assumptions on the underlying data and have different advantages and drawbacks in terms of execution speed and overfitting. Classification evaluation. To evaluate the effectiveness of our automated technique in classifying code comments into our taxonomy, we measured two well-known Information Retrieval (IR) metrics for the quality of results [18], namely precision and recall: $$\text{Precision} = \frac{|TP|}{|TP| + |FP|}$$ $$\text{Recall} = \frac{|TP|}{|TP| + |FN|}$$ $TP$, $FP$, and $FN$ are based on the following definitions: - **True Positives** ($TP$): elements that are correctly retrieved by the approach under analysis (i.e., comments categorized in accordance with the annotators) - **False Positives** ($FP$): elements that are wrongly classified by the approach under analysis (i.e., comments categorized in a different way by the oracle) - **False Negatives** ($FN$): elements that are not retrieved by the approach under analysis (i.e., comments present only in the oracle) The union of $TP$ and $FN$ constitutes the set of correct classifications for a given category (or overall) present in the benchmark, while the union of $TP$ and $FP$ constitutes the set of comments as classified by the used approach. In other words, precision represents the fraction of the comments assigned to a given category that are correctly classified, while recall represents the fraction of the correct comments in that category that are retrieved. **F. Threats to validity** **Sample validity.** One potential criticism of a scientific study conducted on a small sample of projects is that it could deliver little knowledge. In addition, the study highlights the characteristics and distributions of 6 open-source frameworks, mainly focusing on developers' practices rather than end-users' patterns. Historical evidence shows otherwise: Flyvbjerg gave many examples of individual cases contributing to discoveries in physics, economics, and social science [7].
To answer our research questions, we read and inspected more than 28,000 lines of comments belonging to 2,000 Java files (see Section III-D) written by more than 3,000 contributors in 6 different projects (see Table I). We also chose projects belonging to four open-source software ecosystems and with different development environments, numbers of contributors, and project sizes. **Taxonomy validity.** To ensure that the comment categories that emerged from our content analysis sessions were clear and accurate, and to evaluate whether our taxonomy provides an exhaustive and effective way to organize source code comments, we conducted a validation session that involved three experienced developers (see Section III-C) external to the content analysis sessions. These software engineers conducted an individual session on 3 unrelated Java source files. They observed that the categories were clear and the task feasible, and the analysis of the three sets of answers showed a few minor differences, with an agreement above 92%. In addition, we reduced the impact of human errors during the creation of the dataset by developing COMMEAN, a web application to assist the annotation process. **External validity.** Threats come with the generalization of our results. The proposed approach may show different results on different target systems. To reduce this limitation we selected 6 projects with unrelated characteristics and with different sizes in terms of contributors and number of lines. To judge the generalizability, we tested our results simulating this circumstance using cross-project validation. Similarly, another threat concerning generalizability is that our taxonomy refers only to a single object-oriented programming language, i.e., Java.
However, since many object-oriented languages descend from common ancestor languages, many functionalities are similar across object-oriented programming languages, and it is reasonable to expect the same to happen for their corresponding comments. Further research can be designed to investigate whether our results hold in other programming paradigms. **IV. RESULTS AND ANALYSIS** In this section, we present and analyze the results of our research questions aimed at understanding what developers write in comments and with which frequency, as well as at evaluating the results of an automated classification approach. **A. RQ1. How can code comments be categorized?** Our manual analysis led to the creation of a taxonomy of comments having a hierarchy with two layers. The top-level categories gather comments with a similar overall purpose; the inner level provides a fine-grained definition using explanatory names. The top level is composed of 6 distinct categories and the second level of 16 definitions. We now describe each category with the corresponding subcategories. **A. PURPOSE** The PURPOSE category contains the code comments used to describe the functionality of the linked source code, either in a shorter way than the code itself or in a more exhaustive manner. Moreover, these comments are often written in pure natural language and are used to describe the purpose or the behavior of the referenced source code. The keywords ‘what’, ‘how’ and ‘why’ describe the actions that take place in the source code in the SUMMARY, EXPAND, and RATIONALE groups, respectively, which are the subcategories of PURPOSE: **A.1 SUMMARY:** This type of comment contains a brief description of the behavior of the referenced source code. To highlight this type of comment, the question word ‘what’ is used. Intuitively, this category incorporates comments that represent a sharp description of what the code does.
Often, this kind of comment is used by developers to provide a summary that helps understand the behavior of the code without reading it. **A.2 EXPAND:** As with the previous category, the main purpose of reading this type of comment is to obtain a description of the associated code. In this case, the goal is to provide more details on the code itself. The question word ‘how’ can be used to easily recognize the comments belonging to this category. Usually, these comments explain in detail the purpose of short parts of the code, such as details about a variable declaration. **A.3 RATIONALE:** This type of comment is used to explain the rationale behind some choices, patterns, or options. The comments that answer the question ‘why’ belong to this category (e.g., “Why does the code use that implementation?” or “Why did the developer use this specific option?”). **B. NOTICE** The NOTICE category contains the comments related to the description of warnings, alerts, messages, or, in general, functionalities that should be used with care. It also includes the reasons for and the explanation of some developers’ choices, and it covers the description of the strategies adopted to solve a bug, improve performance, prevent faults, etc. Further, it covers use-case examples, giving the developer additional advice about parameters or options, as well as warnings about exceptions. B.1 Deprecation: This type of comment contains explicit warnings used to inform the users about deprecated interface artifacts. This subcategory contains comments related to alternative methods or classes that should be used (e.g., “do not use [this]”, “is it safe to use?” or “refer to: [ref]”). It also includes the description of future or scheduled deprecations to inform the users of candidate changes. Sometimes, a tag comment such as @version, @deprecated, or @since is used.
B.2 Usage: This type of comment regards explicit suggestions to users that are planning to use a functionality. It combines pure natural language text with examples, use cases, snippets of code, etc. Often, the advice is preceded by a metadata mark, e.g., @usage, @param or @return. B.3 Exception: This category describes the reasons for an occurred exception. Sometimes it contains potential suggestions to prevent the unwanted behavior or actions to take when that event arises. Some tags are also used in this case, such as @throws and @exception. C. Under development The Under development category covers the topics related to ongoing and future development. In addition, it includes temporary tips, notes, or suggestions that developers use during development. Sometimes informal requests for improvement or bug correction may also appear. C.1 TODO: This type of comment regards explicit actions to be done or remarks, both for the owners of the file and for other developers. It contains explicit fix notes about bugs to analyze and resolve, or already treated and fixed. Furthermore, it references implicit TODO actions that may be potential enhancements or fixes. C.2 Incomplete: This type comprises partial, pending or empty comment bodies. It may be introduced intentionally or accidentally by developers and left in the incomplete state for some reason. This type may be added automatically by the IDE and not filled in by the developer, e.g., empty “@param” or “@return” directives. C.3 Commented code: This category is composed of comments that contain source code commented out by developers. It wraps functional code in a comment to try hidden features or some work in progress. Usually, this type of comment represents features under test or temporarily removed. The effect of this kind of comment is directly reflected in the program flow. D. Style & IDE The Style & IDE category contains comments that are used to logically separate the code or provide special services.
These comments may be added automatically by the IDE or used to communicate with it. D.1 Directive: This is additional text used to communicate with the IDE. It is in the form of comments so that it is easily skipped by the compiler, and it contains text of limited meaning to human readers. These comments are often added automatically by the IDE or used by developers to change the default behavior of the IDE or compiler. D.2 Formatter: This type of comment represents a simple solution adopted by developers to separate the source code into logical sections. The occurrence of patterns or the repetition of symbols is a good hint at the presence of a comment in the Formatter category. E. Metadata The Metadata category aims to classify comments that define meta information about the code, such as authors, license, and external references. Usually, some specific tags (e.g., “@author”) are used to mark the developer’s name and ownership. The license section provides the legal information about the source code rights or the intellectual property. E.1 License: Generally placed at the top of the file, this type of comment describes the end-user license agreement, the terms of use, and the possibility to study, share and modify the related resource. Commonly, it contains only a preliminary description and some external references to the complete policy agreement. E.2 Ownership: These comments describe the authors and the ownership with different granularity. They may address methods, classes or files. In addition, this type of comment includes credentials or external references about the developers. A special tag is often used, e.g., “@author”. E.3 Pointer: This type of comment contains references to linked resources. The common tags are: “@see”, “@link” and “@url”. Other times developers use custom references, such as “FIX #2611” or “BUG #82100”, which are examples of references to traditional external resources. F.
Discarded This category groups the comments that do not fit into the categories previously defined; they have two flavors: F.1 Automatically generated: This category defines auto-generated notes (e.g., “Auto-generated method stub”). In most cases, the comment represents the skeleton with a comment placeholder provided by the IDE and left untouched by the developers. F.2 Noise: This category contains all the remaining comments that are not covered by the previous categories. In addition, it contains the comments whose meaning is hard to understand due to their poor content (e.g., meaningless because out of context). **B. RQ2. How often does each category occur?** The second research question investigates the occurrence of each category of comments in the 2,000 source files that we manually classified from our 6 OSS subject projects. Figure 2. Frequencies of comments per category. Top, red bars show the occurrences by blocks of comments and bottom, blue bars by lines. Figure 2 shows the distribution of the comments across the categories; it reports the cumulative value for the top-level categories (e.g., NOTICE) and the absolute value for the inner categories (e.g., EXCEPTION). For each category, the top red bar indicates the number of blocks of comments in the category, while the bottom blue bar indicates the number of non-blank lines of comments in the category. Comparing blocks and lines, we see that, unsurprisingly, the longest type of comment is LICENSE, with more than 11 lines on average per block. The EXPAND category follows with a similar average length. The SUMMARY category has an average length of only 1.4 lines, which is surprising, since it is used to describe the purpose of possibly very long methods, variables, or blocks of code. The remaining categories show negligible differences between the number of blocks and lines. We consider the quality metric code/comment ratio, which was proposed at line granularity [21, 9], in the light of our results.
We see that 59% of the lines of comments should not be considered (i.e., categories from C to F), as they do not reflect any aspect of the readability and maintainability of the code they pertain to; this would significantly change the results. On the other hand, if one considers blocks of comments, the result would be closer to the aspect that the code/comment metric sets out to measure. In this case, a simple solution would be to only filter out the METADATA category, because the other categories seem to have a negligible impact. Considering the distribution of the comments, we see that the SUMMARY subcategory is the most prominent one. This confirms the value of research efforts that attempt to generate summaries for functions and methods automatically, by analyzing the source code [20]. In fact, these methods would relieve developers of the burden of writing a significant amount of the comments we found in source code files. On the other hand, SUMMARY accounts for only 24% of the overall lines of comments, thus suggesting that it gives only a partial picture of the variety and role of this type of documentation. The second most prominent category is USAGE. Together with the prominence of SUMMARY, this suggests that the comments in the systems we analyzed target end-user developers more frequently than internal developers. This is also confirmed by the low occurrence of the UNDER DEVELOPMENT category. Concerning UNDER DEVELOPMENT, the low number of comments in this category may also indicate that developers favor other channels to keep track of tasks to be done in the code. Finally, the variety of categories of comments and their distribution underlines once more the importance of a classification effort before applying any analysis technique to the content and value of code comments. The low number of discarded cases corroborates the completeness of the taxonomy proposed in RQ1. C. RQ3.
How effective is an automated approach, based on machine learning, in classifying code comments? To evaluate the effectiveness of machine learning algorithms in classifying code comments, we employed a supervised learning method. Supervised machine learning bases its decisions on a pre-defined set of features. Since we set out to classify lines of code comments, we computed the features at line granularity.

Text preprocessing. We preprocessed the comments by performing the following actions in this order: (1) tokenizing the words on spaces and punctuation (except for words such as ‘@usage’, which remain compounded), (2) splitting identifiers based on camel casing (e.g., ‘ModelTree’ became ‘Model Tree’), (3) lowercasing the resulting terms, (4) removing numbers and rare symbols, and (5) creating one instance per line.

Feature creation. Table II shows some of the features we devised and all those that appear in the final model. Since the optimal set of features is not known a priori, we started with some simple, traditional features and iteratively experimented with more sophisticated ones, in order to improve precision and recall for all the projects we analyzed. A set of features commonly used in text recognition [24] measures the occurrence of words; in fact, words are the fundamental tokens of all the languages we want to classify. To avoid overfitting to words too specific to a project, such as code identifiers, we considered only words occurring above a certain threshold \( t \). This value was found experimentally: we started with a minimum of 3 and increased it up to 10; since values around 7 did not change precision and recall, we chose that threshold. In addition, other features capture information about the context of the line, such as the text length, the position of the comment in the whole file, the number of rows, the nature of the adjacent rows, etc. The last set of features is category specific.
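The five preprocessing steps can be sketched as follows; the paper does not spell out its exact tokenization rules, so the regular expressions here are illustrative assumptions.

```python
import re

def preprocess(comment_line):
    """Illustrative version of the 5-step preprocessing: tokenize, split camel
    case, lowercase, drop numbers; one term list per comment line."""
    # (1) tokenize on spaces/punctuation, keeping '@'-prefixed words compounded
    tokens = re.findall(r"@?\w+", comment_line)
    # (2) split identifiers on camel case, e.g. 'ModelTree' -> 'Model', 'Tree'
    split = []
    for t in tokens:
        split.extend(re.findall(r"@?[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", t) or [t])
    # (3) lowercase the resulting terms; (4) remove pure numbers
    terms = [t.lower() for t in split if not t.isdigit()]
    return terms  # (5) one instance per line

print(preprocess("Returns the ModelTree built in 2016 @usage"))
```

Each resulting term list would then be turned into word-occurrence features (counting only words above the frequency threshold \( t \)) before being fed to the classifier.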
We defined regular expressions to recognize specific patterns. We report three detailed examples:

- This regular expression matches single-line or multi-line comments with an empty body:

```regex
^\s*/\*(\*|\s)*\*/\s*$
```

- This regular expression matches the special keywords used in the USAGE category:

```regex
(?i)@param|@usage|@since|@value|@return
```

- The following regular expression finds runs of repeated symbols that may be used in the FORMATTED category:

```regex
([^\*\s])\1{3,}
```

Table III
RESULTS OF THE CLASSIFICATION WITH NAIVE BAYES MULTINOMIAL CLASSIFIER (P = precision, R = recall; 10-fold validation and cross-project validation on CDF, Guava, Guice, and Hadoop)

<table>
<thead>
<tr>
<th>Category</th>
<th>10-fold P</th>
<th>10-fold R</th>
<th>Cross-project P</th>
<th>Cross-project R</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5"><em>Purpose</em></td>
</tr>
<tr>
<td>Summary</td>
<td>0.88</td>
<td>0.96</td>
<td>0.68</td>
<td>0.61</td>
</tr>
<tr>
<td>Expand</td>
<td>0.82</td>
<td>0.99</td>
<td>0.61</td>
<td>0.69</td>
</tr>
<tr>
<td>Rationale</td>
<td>1.00</td>
<td>0.84</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<td>Purpose</td>
<td>0.98</td>
<td>0.64</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<td>Notice</td>
<td>0.50</td>
<td>0.56</td>
<td>0.15</td>
<td>0.00</td>
</tr>
<tr>
<td>Deprecation</td>
<td>0.69</td>
<td>0.84</td>
<td>0.23</td>
<td>0.00</td>
</tr>
<tr>
<td>Usage</td>
<td>0.99</td>
<td>0.77</td>
<td>0.77</td>
<td>0.81</td>
</tr>
<tr>
<td>Exception</td>
<td>0.99</td>
<td>0.99</td>
<td>0.98</td>
<td>0.98</td>
</tr>
</tbody>
</table>

V. RELATED WORK

A. Information Retrieval Techniques

Lawrie et al. [14] use information retrieval techniques based on cosine similarity in vector space models to assess program quality, under the hypothesis that "if the code is high quality, then the comments give a good description of the code". Marcus et al.
propose a novel information retrieval technique to automatically identify traceability links between code and documentation [19]. Similarly, De Lucia et al. focus on the problem of recovering traceability links between the source code and the connected free-text documentation; they compare a probabilistic information retrieval model with a vector space information retrieval model [16]. Even though comments are part of software documentation, previous studies on information retrieval generally focus on the relation between code and free-text documentation.

B. Comments Classification

Several studies regarding code comments in the ’80s and ’90s concern the benefit of using comments for program comprehension [35], [31], [32]. Stamelos et al. suggest a simple ratio metric between code and comments, with the weak hypothesis that software quality grows if the code is more commented [27]. Similarly, two other authors define metrics for measuring the maintainability of a software system and discuss how those metrics can be combined to control quality characteristics in an efficient manner [21], [9]. More recent studies place greater emphasis on the code comments in a software project. Fluri et al. present a heuristic approach to associate comments with code, investigating whether developers comment their code. Marcus and Maletic propose an approach based on information retrieval techniques [20]. Maalej and Robillard investigate API reference documentation (such as Javadoc) in Java SDK 6 and .NET 4.0, proposing a taxonomy of knowledge types; they use a combination of grounded and analytical approaches to create the taxonomy [17]. Witte et al., instead, used Semantic Web technologies to connect software code and documentation artifacts [34].
However, both approaches focus on external documentation and do not investigate evolutionary aspects or the quality relationship between code and comments, i.e., they do not track how documentation and source code change together over time, nor the combined quality factor. More in focus is the work of Steidl et al. [29]. They proposed a model for comment quality based on different comment categories and use a classification based on machine learning techniques, tested on Java and C/C++ programs. Despite the quality of the work, they found only 7 high-level categories of comments, based mostly on comment syntax, i.e., inline comments, section separator comments, task comments, etc. A different approach is adopted by Padioleau et al. [22]. Their innovative idea is to create a taxonomy based on the comment's meaning. Even if it is more difficult to extract the content from human sentences, their proposal is a more suitable technique for defining a taxonomy. We follow this path in our work.

VI. CONCLUSION

Code comments contain valuable information to support software development, especially during code reading and code maintenance. Nevertheless, not all comments are the same: for accurate investigation, analysis, usage, and mining of code comments, this has to be taken into account. In this work we investigated how comments can be categorized, also proposing an approach for their automatic classification. The contributions of our work are:

- A novel, empirically validated, hierarchical taxonomy of code comments for Java projects, comprising 16 inner categories and 6 top categories.
- An assessment of the relative frequency of comment categories in 6 OSS Java software systems.
- A publicly available dataset of more than 2,000 source code files with manually classified comments, also linked to the source code entities they refer to.
- An empirical evaluation of a machine learning approach to automatically classify code comments according to the aforementioned taxonomy.
Information Needs in Contemporary Code Review

Pascarella, Luca; Spadini, Davide; Palomba, Fabio; Bruntink, Magiel; Bacchelli, Alberto

DOI: 10.1145/3274404
Publication date: 2018
Document Version: Accepted author manuscript
Published in: Proceedings of the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing

Contemporary code review is a widespread practice used by software engineers to maintain high software quality and share project knowledge. However, conducting a proper code review takes time, and developers often have limited time for review. In this paper, we aim at investigating the information that reviewers need to conduct a proper code review, to better understand this process and how research and tool support can make developers become more effective and efficient reviewers. Previous work has provided evidence that a successful code review process is one in which reviewers and authors actively participate and collaborate. In these cases, the threads of discussions that are saved by code review tools are a precious source of information that can later be exploited for research and practice. In this paper, we focus on this source of information as a way to gather reliable data on the aforementioned reviewers’ needs.
We manually analyze 900 code review comments from three large open-source projects and organize them into categories by means of a card sort. Our results highlight the presence of seven high-level information needs, such as knowing the uses of methods and variables declared/modified in the code under review. Based on these results, we suggest ways in which future code review tools can better support collaboration and the reviewing task. Preprint [https://doi.org/10.5281/zenodo.1405894]. Data and Materials [https://doi.org/10.5281/zenodo.1405902].

CCS Concepts: • Software and its engineering → Software verification and validation;

Additional Key Words and Phrases: code review; information needs; mining software repositories

1 INTRODUCTION

Peer code review is a well-established software engineering practice aimed at maintaining and promoting source code quality, as well as sustaining the development community by means of knowledge transfer of the design and implementation solutions applied by others [2]. Contemporary code review, also known as Modern Code Review (MCR) [2, 17], represents a lightweight process that is (1) informal, (2) tool-based, (3) asynchronous, and (4) focused on inspecting new proposed code changes rather than the whole codebase [49]. In a typical code review process, developers (the reviewers) other than the code change author manually inspect newly committed changes to find as many issues as possible and provide feedback that needs to be addressed by the author of the change before the code is accepted and put into production [6]. Modern code review is a collaborative process in which reviewers and authors conduct an asynchronous online discussion to ensure that the proposed code changes are of sufficiently high quality [2] and fit the project’s direction [26] before they are accepted.
In code reviews, discussions range from low-level concerns (e.g., variable naming and code style) up to high-level considerations (e.g., fit within the scope of the project and future planning) and encompass both functional defects and evolutionary aspects [10]. For example, a reviewer may ask questions regarding the structure of the changed code [57] or clarifications about the rationale behind some design decisions [55]; another reviewer may respond or continue the thread of questions; and the author can answer the questions (e.g., explaining the motivation that led to a change) and implement changes to the code to address the reviewers’ remarks. Even though studies have shown that modern code review has the potential to support software quality and dependability [17, 39, 41], researchers have also provided strong empirical evidence that the outcome of this process is rather erratic and often unsatisfying or misaligned with the expectations of participants [2, 10, 37]. This erratic outcome is caused by the cognitive-demanding nature of reviewing [7], whose outcome mostly depends on the time and zeal of the involved reviewers [17]. Based on this, a large portion of the research efforts on tools and processes to help code reviewing is explicitly or implicitly based on the assumption that reducing the cognitive load of reviewers improves their code review performance [7]. In the current study, we continue on this line of better supporting the code review process through the reduction of reviewers’ cognitive load. Specifically, our goal is to investigate the information that reviewers need to conduct a proper code review. We argue that, if this information were available at hand, reviewers could focus their efforts and time on correctly evaluating and improving the code under review, rather than spending cognitive effort and time on collecting the missing information.
By investigating reviewers’ information needs, we can better understand the code review process, guide future research efforts, and envision how tool support can make developers become more effective and efficient reviewers. To gather data about reviewers’ information needs we turn to one of the collaborative aspects of code review, namely the discussions among participants that happen during this process. In fact, past research has shown that code review is more successful when there is a functioning collaboration among all the participants. For example, Rigby et al. reported that the efficiency and effectiveness of code reviews are most affected by the amount of review participation [50]; Kononenko et al. [34] showed that review participation metrics are associated with the quality of the code review process; McIntosh et al. found that a lack of review participation can have a negative impact on long-term software quality [39, 60]; and Spadini et al. studied review participation in production and test files, presenting a set of identified obstacles limiting the review of code [54]. For this reason, from code review communication, we expect to gather evidence of reviewers’ information needs that are solved through the collaborative discussion among the participants. To that end, we consider three large open-source software projects and manually analyze 900 code review discussion threads that started from a reviewer’s question. We focus on what kind of questions are asked in these comments and their answers. As shown in previous research [12, 14, 33, 56], such questions can implicitly represent the information needs of code reviewers. In addition, we conduct four semi-structured interviews with developers from the considered systems and one focus group with developers from a software quality consultancy firm, both to challenge our outcome and to discuss developers’ perceptions. 
Better understanding what reviewers’ information needs are can lead to reduced cognitive load for the reviewers, thus leading, in turn, to better and shorter reviews. Furthermore, knowing these needs helps drive the research community toward the definition of methodologies and tools able to properly support code reviewers when verifying newly submitted code changes. Our analysis led to seven high-level information needs, such as knowing the uses of methods and variables declared/modified in the code under review, and their analysis in the code review lifecycle. Among our results, we found that the needs to know (1) whether a proposed alternative solution is valid and (2) whether the understanding of the reviewer about the code under review is correct are the most prominent ones. Moreover, all the reviewers’ information needs are replied to within a median time of seven hours, thus pointing to the large time savings that can be achieved by addressing these needs through automated tools. Based on these results, we discuss how future code review tools can better support collaboration and the reviewing task.

2 BACKGROUND AND RELATED WORK

This section describes the basic components that form a modern code review as well as the literature related to information needs and code review participation.

2.1 Background: The code review process

Figure 1 depicts a code review (pertaining to the OpenStack project) done with a typical code review tool. Although this is one of the many available review tools, their functionalities are largely the same [65]. In the following we briefly describe each of the components of a review as provided by code review tools. Code review tools provide an ID and a status (part 1 in Figure 1) for each code review, which are used to track the code change and know whether it has been merged (i.e., put into production) or abandoned (i.e., it has been evaluated as not suitable for the project).
Code review tools also allow the change author to include a textual description of the code change, with the aim of providing reviewers with more information on the rationale and behavior of the change. However, past research has provided evidence that the quality and level of detail of the descriptions that accompany code changes are often suboptimal [57], thus making it harder for reviewers to properly understand the code change through this support. The fact that the change description is often not optimal strengthens the importance of the goal of our study: an improved analysis of developers’ needs in code review can provide benefits in terms of review quality [34]. The second component of a typical code review tool is a view on the technical meta-information on the change under review (part 2 in Figure 1). This meta-information includes the author and committer of the code change, the commit ID, the parent commit ID, and the change ID, which can be used to track the submitted change over the history of the project. Part 3 of the tool in Figure 1 reports more information on the reviewers assigned to inspect the submitted code change, while part 4 lists the source code files modified in the commit (i.e., the files on which the review will be focused). Finally, part 5 is the core component of a code review tool and the one that involves most collaborative aspects. It reports the discussion that the author and reviewers are having on the submitted code change. In particular, reviewers can ask clarifications or recommend improvements to the author, who can instead reply to the comments and propose alternative solutions. This mechanism is often accompanied by the upload of new versions of the code change (i.e., revised patches or iterations), which leads to an iterative process that continues until all the reviewers are satisfied with the change or decide not to include it into production. Figure 2 shows a different view that contains both reviewers’ and authors’ comments.
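The five components described above map naturally onto a small data model. The sketch below is a hypothetical simplification for illustration; the field names and sample values are assumptions, not the schema of any actual review tool.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Comment:
    author: str
    text: str
    line: Optional[int] = None  # None for comments on the whole change

@dataclass
class CodeReview:
    review_id: str                                           # part 1: tracking ID
    status: str                                              # "open", "merged", or "abandoned"
    description: str                                         # change description by the author
    commit_id: str                                           # part 2: technical meta-information
    parent_commit_id: str
    reviewers: List[str] = field(default_factory=list)       # part 3: assigned reviewers
    changed_files: List[str] = field(default_factory=list)   # part 4: modified files
    discussion: List[Comment] = field(default_factory=list)  # part 5: review discussion

# Hypothetical review: a reviewer comments on one line, iterating the discussion.
review = CodeReview("change-1", "open", "Refactor job engine", "abc123", "def456")
review.reviewers.append("Alice")
review.discussion.append(Comment("Alice", "Why ssh instead of a REST call?", line=42))
print(review.status, len(review.discussion))
```

Each revised patch would append further `Comment` entries (and update `commit_id`), mirroring the iterative process the text describes.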
In this case, the involved developers discuss a specific line of code, as opposed to Alice from the previous example, who commented on the entire code change (Figure 1, end of part 5).

[Figure: example change description, "Implement EDP for a Spark standalone cluster", explaining the new EDP engine for a Spark standalone cluster, the general improvements included, the remaining to-do items, and patch set 1 uploaded by Alice (Change-Id: I2c84e9cdb75e846754896d7c435e94bc6cc397ff).]

2.2 Related Work

Over the last decade the research community has spent considerable effort in studying code reviews (e.g., [3, 10, 11, 17, 20, 32, 54]). In this section, we compare and contrast our work to previous research in two areas: first, we consider studies that investigate the information needs of developers in various contexts; then, we analyze previous research that focused on code review discussion, participation, and time.

2.2.1 Information needs. Breu et al.
[12] conducted a study, which greatly inspired the one we present here, on developers’ information needs based on the analysis of collaboration among users of a software engineering tool (i.e., an issue tracking system). In their study, the authors quantitatively and qualitatively analyzed the questions asked in a sample of 600 bug reports from two open-source projects, deriving a set of information needs in bug reports. The authors showed that active and ongoing participation were important factors for making progress on the bugs reported by users, and they suggested a number of actions to be performed by researchers and tool vendors to improve bug tracking systems. Ko et al. [33] studied the information needs of developers in collocated development teams. The authors observed the daily work of developers and noted the types of information desired. They identified 21 different information types in the collected data and discussed the implications of their findings for software designers and engineers. Buse and Zimmermann [14] analyzed developers’ needs for software development analytics: to that end, they surveyed 110 developers and project managers. With the collected responses, the authors proposed several guidelines for analytics tools in software development. Sillito et al. [53] conducted a qualitative study on the questions that programmers ask when performing change tasks. Their aim was to understand what information a programmer needs to know about a code base while performing a change task, and also how they go about discovering that information. The authors categorized and described 44 different kinds of questions asked by the participants. Finally, Herbsleb et al. [29] analyzed the types of questions that get asked during design meetings in three organizations.
They found that most questions concerned the project requirements, particularly what the software was supposed to do and, somewhat less frequently, scenarios of use. Moreover, they also discussed the implications of the study for design tools and methods. The work we present in this paper is complementary to the ones discussed so far: we take a further step by investigating the information needs of developers who review code changes, with the aim of deepening our understanding of the code review process and of leading to future research and tools that better support reviewers in conducting their tasks. 2.2.2 Code Review Participation and Time. Extensive work has been done by the software engineering research community in the context of code review participation. Abelein et al. [1] investigated the effects of user participation and involvement on system success and explored which methods are available in the literature, showing that it can have a significant correlation with system quality. Thongtanunam et al. [62] showed that reviewing expertise (which is approximated based on review participation) can reverse the association between authoring expertise and defect-proneness. Even more importantly, Rigby et al. [50] reported that the level of review participation is the most influential factor in code review efficiency. Furthermore, several studies have suggested that patches should be reviewed by at least two developers to maximize the number of defects found during the review, while minimizing the reviewing workload on the development team [47, 49, 52, 61]. Thongtanunam et al. [60] showed that the number of participants involved in a review has a large relationship with the subsequent defect proneness of files in the Qt system: a file that is examined by more reviewers is less likely to have post-release defects. Bavota et al.
[8] also found that patches with a low number of reviewers tend to have a higher chance of inducing new bug fixes. Moreover, McIntosh et al. [38, 39] measured review investment (i.e., the proportion of patches that are reviewed and the amount of participation) in a module and examined the impact that review coverage has on software quality. They found that patches with low review investment are undesirable and have a negative impact on code quality. In a study of code review practices at Google, Sadowski et al. [51] found that Google has refined its code review process over several years into an exceptionally lightweight one, which, in part, seems to contradict the aforementioned findings. Although the majority of changes at Google are small (a practice supported by most related work [48]), these changes mostly have one reviewer and have no comments other than the authorization to commit. Ebert et al. [23] took a first step toward identifying the factors that may confuse reviewers, since confusion likely impacts the efficiency and effectiveness of code review. In particular, they manually analyzed 800 code review comments of Android projects to identify those where the reviewers expressed confusion. Ebert et al. found that humans can reasonably identify confusion in code review comments and proposed the first binary classifier able to perform the same task automatically; they also observed that identifying confusion factors in inline comments is more challenging than in general comments. Finally, Spadini et al. [54] analyzed more than 300,000 code reviews and interviewed 12 developers about their best practices when reviewing test files. As a result, they presented an overview of current code review practices, a set of identified obstacles limiting the review of test code, and a set of issues that developers would like to see improved in code review tools.
Based on their findings, the authors proposed a series of recommendations and suggestions for the design of tools and future research. Furthermore, previous research investigated how to make a code review shorter, so that patches are accepted at a faster rate. For example, Jiang et al. [31] showed that patches developed by more experienced developers are more easily accepted, reviewed faster, and integrated more quickly. Additionally, the authors stated that reviewing time is mainly impacted by submission time, the number of subsystems affected by the patch, and the number of requested reviewers. Baysal et al. [9] showed that the size of the patch and the part of the code base being modified are important factors that influence the time required to review a patch, and are likely related to the technical complexity of a given change. Recently, Chatley and Jones proposed an approach aimed at enhancing the performance of code review [16]. The authors built DiggIT to automatically generate code review comments about potentially missing changes and worrisome trends in the growth of size and complexity of the files under review. By deploying DiggIT at a company, the authors found that the developers considered DiggIT's comments actionable and fixed them at an overall rate of 51%, thus indicating the potential of this approach in supporting code review performance. Despite many studies showing that code review participation has a positive impact on the overall software development process (i.e., number of post-release defects and time spent in reviewing), none of these studies focused on what the developers' needs are when performing code review. To fill this gap, our study aims at increasing our empirical knowledge on this field by means of quantitative and qualitative research, with the potential of reducing the cognitive load of reviewers and the time needed for the review.
3 METHODOLOGY The goal of our study is to increase our empirical knowledge on reviewers' needs when performing code review tasks, with the purpose of identifying promising paths for future research on code review and the next generation of software engineering tools required to improve collaboration and coordination between source code authors and reviewers. The perspective is that of researchers, who are interested in understanding developers' needs in code review so that they can more effectively devise new methodologies and techniques that help practitioners promote a collaborative environment in code review and reduce discussion overheads, thus improving the overall code review process. Starting from a set of discussion threads between authors and reviewers, we begin our investigation by eliciting the actual needs that reviewers have when performing code review: • **RQ$_1$**: What reviewers' needs can be captured from code review discussions? Specifically, we analyze the types of information that reviewers may need when reviewing, we compute the frequency of each need, and we challenge our outcome with developers from the analyzed systems and from an external company. Thus, we have three sub-questions: • **RQ$_{1.1}$**: What are the kinds of information code reviewers require? • **RQ$_{1.2}$**: How often does each category of reviewers' needs occur? • **RQ$_{1.3}$**: How do developers perceive the identified needs? Having investigated reviewers' needs from the reviewer's perspective, we further explore the collaborative aspects of code review by asking: • **RQ$_2$**: What is the role of reviewers' needs in the lifecycle of a code review? Specifically, we first analyze how often each reviewers' need is accompanied by a reply from the author of the code change: in other words, we aim at measuring how much authors of the code under review interact with reviewers to make the applied code change more comprehensible and ease the reviewing process.
To complement this analysis, we evaluate the time required by authors to address a reviewer's need; also in this case, the goal is to measure the degree of collaboration between authors and reviewers. Finally, we aim at understanding whether and how the reviewers' information needs vary at different iterations of the code review process. For instance, we want to assess whether some specific needs arise at the beginning of the process (e.g., because the reviewer does not have enough initial context to understand the code change) or, similarly, whether clarification questions only appear at a later stage (e.g., when only the last details are missing and the context is clear). Accordingly, we structure our second research question into three sub-questions: • **RQ$_{2.1}$**: What are the reviewers' information needs that attract more discussion? • **RQ$_{2.2}$**: How long does it take to get a response to each reviewers' information need? • **RQ$_{2.3}$**: How do the reviewers' information needs change over the code review process? The following subsections describe the method we use to answer our research questions. 3.1 Subject Systems The first step toward addressing our research goals is the selection of a set of code reviews that might be representative for understanding the reviewers' needs when reviewing source code changes. We rely on the well-known Gerrit platform, a code review tool used by several major software projects. Specifically, Gerrit provides a simplified web-based code review interface and a repository manager for Git. From the open-source software systems using Gerrit, we select three: OpenStack, Android, and QT. The selection was driven by two criteria: (i) these systems have been extensively studied in the context of code review research and have been shown to be highly representative of the types of code review done in open-source projects
[8, 38, 39]; (ii) these systems have a large number of active authors and reviewers over a long development history. 3.2 Gathering Code Review Threads We automatically mine Gerrit data by relying on the publicly available APIs it provides. For the considered projects, the number of code reviews is over one million: this makes the manual analysis of all of them practically impossible. Thus, as done by Breu et al. [12], we select a random subset composed of 300 code reviews per project, for which we identify up to 1,800 messages (i.e., we extract a total of 900 code review threads). Since we are interested in discussions, we take into account only closed code reviews, considering both merged and abandoned patches, while we do not consider recently opened or pending requests. We detect reviewers' questions (considering the presence of a '?' sign) that start a discussion thread and we extract all the subsequent comments (made by the author, the reviewer, or other developers) in the whole thread. The considered threads refer to both patch sets and inline discussions. To better illustrate the mining process of general discussions, Figure 1 reports a code review extracted from OpenStack. As shown at the bottom of the figure (part 5), author and reviewers opened a discussion on the performed change. Figure 2 shows a thread of discussion started at line level. In both cases, all the comments among the participants represent the types of discussion threads that we use to detect the information needs of reviewers.
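The detection of question-started threads can be sketched as follows. This is an illustrative simplification, not our actual mining scripts: the field names `author_role` and `text` are assumptions, and real Gerrit data would be fetched through its REST API rather than hard-coded.

```python
# Sketch: group chronologically ordered review comments into discussion
# threads that start with a reviewer's question (detected via a '?' sign).
# Field names ('author_role', 'text') are illustrative assumptions.

def extract_question_threads(comments):
    """comments: chronologically ordered dicts with 'author_role' and 'text'.
    Returns a list of threads; each thread is a list of comments whose
    first element is a reviewer comment containing a question mark."""
    threads = []
    current = None
    for comment in comments:
        if comment["author_role"] == "reviewer" and "?" in comment["text"]:
            current = [comment]          # a reviewer question opens a thread
            threads.append(current)
        elif current is not None:
            current.append(comment)      # subsequent replies join the thread
    return threads

comments = [
    {"author_role": "reviewer", "text": "Is this needed?"},
    {"author_role": "author", "text": "Yes, it logs the delete options."},
    {"author_role": "reviewer", "text": "OK, thanks."},
]
threads = extract_question_threads(comments)  # one thread of three comments
```

In practice the opening question, all replies, and the thread metadata would then be stored as described next.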
For each identified thread, we store the following information:
- the Gerrit id of the code review;
- the revision id that identifies the patch set of a code review;
- the opening question, the answers, and the source code URL identifier of the change;
- the practitioner role, i.e., author or reviewer;
- the code review status, i.e., whether it is merged or abandoned;
- the size of the thread, i.e., the number of comments in the discussion;
- the creation and the update time.
We use the aforementioned pieces of information to answer our research questions as detailed in the following. 1https://www.gerritcodereview.com/ 2https://git-scm.com/ 3https://review.openstack.org/ 4https://android-review.googlesource.com/ 5https://codereview.qt-project.org Fig. 3. The taxonomy of reviewers' information needs that emerged from our analysis 3.3 RQ1 - Identifying the Reviewers' Needs from Code Review Discussions To answer RQ1, we manually identify the reviewers' needs in code review by following a strategy similar to the one used in previous work on information needs [12, 14, 29, 33, 53]. Specifically, we perform a card sorting [42] that involves all the authors of this paper (2 graduate students, 1 research associate, and 1 faculty member), who have at least seven years of programming experience. From now on, we refer to them as the inspectors. Card sorting is a well-established technique used in information architecture to create mental models and define taxonomies from input data [42]. In our case, it is used to organize code review threads into hierarchies and identify common themes. We rely on code review threads (i.e., questions and answers) to better understand the meaning behind reviewers' questions that may implicitly define the reviewers' need. Finally, we apply an open card sorting: we have no predefined groups of reviewers' information needs; rather, the needs emerge and evolve during the procedure.
In our case, the process consists of the three iterative sessions described as follows. **Iteration 1:** Initially, two inspectors (the first two authors of this paper) independently analyze an initial set of 100 OpenStack code review threads each. Then, they open a discussion on the reviewers' needs identified so far and try to reach a consensus on the names and types of the assigned categories. During the discussion, the other two inspectors also participate, with the aim of validating the operations done in this iteration and suggesting possible improvements. As an output, this step provides a draft categorization of reviewers' needs. **Iteration 2:** The first two inspectors re-categorize the 100 initial reviewers' needs according to the decisions taken during the discussion; then, they use the draft categorization as a basis for categorizing the remaining set of 200 code review threads belonging to OpenStack. This phase is used both for assessing the validity of the categories emerging from the first iteration (by confirming some of them and redefining others) and for discovering new categories. Once this iteration is completed, all four inspectors open a new discussion aimed at refining the draft taxonomy, merging overlapping categories or better characterizing the existing ones. A second version of the taxonomy is produced. **Iteration 3:** The first two inspectors re-categorize the 300 code review threads previously analyzed. Afterwards, the first inspector classifies the reviewers' needs concerning the two remaining systems, trying to apply the defined categories to the code review threads of Android and QT. In cases where the inspector cannot directly apply the categories defined so far, the inspector reports such cases to the other inspectors so that a new discussion is opened.
In practice, this never happened: the inspector could fit all the needs into the previously defined taxonomy, even when considering the new systems. This result suggests that the categorization emerging from the first iterations reached saturation [24], at least within the considered sample of threads. Additional validation. To further check and confirm the operations performed by the first inspector, the third author of this paper, who was only involved in the discussion of the categories but not in the assignment of the threads to categories, individually analyzed all the code review threads belonging to the three considered projects. This inspector classified all the 900 threads according to the second version of the taxonomy, as defined through iteration 2. The inspector did not need to define any further categories (thus suggesting that the taxonomy was exhaustive for the considered sample); however, in six cases there was a disagreement between the category he assigned and the one assigned by the first author: as a consequence, the two authors opened a discussion in order to reach an agreement on the actual category to assign to those code review threads. Overall, the inter-rater agreement between this inspector and the first one, computed using Krippendorff's $k$ [35], was 98%. Following this iterative process, we defined a hierarchical categorization composed of two layers. The top layer consists of seven categories, while the inner layer consists of 18 subcategories. Figure 3 depicts the identified top- and sub-categories. During the iterative sessions, $\approx$4% of the analyzed code review threads are discarded from our analysis since they do not contain useful information to understand the reviewers' needs.
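As an illustration of the inter-rater check described above, the sketch below computes plain percent agreement between two raters. Note that our study reports Krippendorff's coefficient, which additionally corrects for chance agreement; the labels here are hypothetical, not taken from our dataset.

```python
def percent_agreement(labels_a, labels_b):
    """Fraction of items to which two raters assign the same category."""
    assert len(labels_a) == len(labels_b) and labels_a
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical category assignments for six threads by two inspectors:
inspector_1 = ["N1", "N2", "N1", "N3", "N7", "N1"]
inspector_3 = ["N1", "N2", "N1", "N3", "N5", "N1"]
agreement = percent_agreement(inspector_1, inspector_3)  # 5/6, i.e., ~0.83
```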
We assign the discarded comments to four temporary sub-categories that indicate the reasons why they are discarded (e.g., they are noise or sarcastic comments); subsequently, we gather them into an additional top-level category, Discarded. To answer RQ$_{1.1}$, we report the reviewers' needs belonging to the categories identified in the top layer. Subsequently, to answer RQ$_{1.2}$ and understand how frequently each category of needs appears, we verify how many information needs are assigned to each category. In this way, we can overview the most popular reviewers' needs when performing code review tasks. We answer this research question by presenting and discussing bar plots showing the frequency of each identified category. To answer RQ$_{1.3}$, we discuss the outcome of the previous sub-RQs with developers of the three considered systems and an external company. This gives us the opportunity to challenge our findings, triangulate our results, and complement our vision on the problem. Table 1. Participants' demographics <table> <thead> <tr> <th>ID</th> <th>Years as developer</th> <th>Years as reviewer</th> <th>Working context</th> </tr> </thead> <tbody> <tr> <td>$P_1$</td> <td>15</td> <td>10</td> <td>OpenStack</td> </tr> <tr> <td>$P_2$</td> <td>20</td> <td>10</td> <td>OpenStack</td> </tr> <tr> <td>$P_3$</td> <td>25</td> <td>20</td> <td>Qt</td> </tr> <tr> <td>$P_4$</td> <td>10</td> <td>10</td> <td>Android</td> </tr> <tr> <td>FG$_1$</td> <td>8</td> <td>7</td> <td>Company A</td> </tr> <tr> <td>FG$_2$</td> <td>10</td> <td>10</td> <td>Company A</td> </tr> <tr> <td>FG$_3$</td> <td>7</td> <td>5</td> <td>Company A</td> </tr> </tbody> </table> Interviews with reviewers from the subject systems. To organize the discussion with the developers of Android, OpenStack, and Qt, we use semi-structured interviews, a format often used in exploratory investigations to understand phenomena and seek new insights [68].
A crucial step in this analysis is the recruitment strategy, i.e., the way we select and recruit participants for the semi-structured interviews. With the aim of gathering feedback and opinions from developers having solid experience with the code review practices of the considered projects, we select only developers who had conducted at least 100 reviews\textsuperscript{6} in their respective systems. Then, we randomly select 10 per system and invite them via email to participate in an online video interview. Four experienced code reviewers agreed to be interviewed: two from OpenStack, one from Qt, and one from Android. The response rate achieved (17\%) is in line with the one achieved by many previous works involving developers [43, 44, 66]. Table 1 summarizes the interviewees' demographics. The interviews are conducted by the first two authors of this work via Skype. With the participants' consent, all the interviews are recorded and transcribed for analysis. Each interview starts with general questions about programming and code review experience. In addition, we discuss whether the interviewees consider code reviews important, which tool they prefer, and generally how they conduct reviews. Overall, we organize the interview structure around five sections: 1. General information regarding the developer; 2. General perceptions on and experience with code review; 3. Specific information needs during code review; 4. Ranking of information needs during code review; 5. Summary. The main focus regarding the information needs is centered around points 3 and 4: we iteratively discuss each of the categories that emerged from our analysis (also showing small examples where needed). Afterwards, we discuss the following main questions with each interviewee: 1. What is your experience with \texttt{<category>}? 2. Do you think \texttt{<category>} is important to successfully perform a code review? Why? 3.
Do you think current code review tools support this need? 4. How would you improve current tools? Our goal with these questions is to better understand the relevance of each developer's need and whether developers feel it is somehow incorporated in current code review tools or, if not, how they would envision incorporating it. Subsequently, we ask developers to rank the categories according to their perceived importance. Our goal is to understand what the interviewees perceive as the most important needs and why. To conclude, the first two authors of this paper summarize the interview and, before finalizing the meeting, present these summaries to the interviewee to validate our interpretation of their opinions. **Focus group with an external commercial company.** While the original developers provide an overview of the information needs identified in the context of the systems analyzed in this study, our findings may not provide enough diversity. To improve this aspect, we complement the aforementioned semi-structured interviews with an additional analysis targeting experts in assessing the source code quality of software systems. In particular, we recruited three employees from a European firm specialized in software quality assessments for its customers. The mission of the firm is the definition of techniques and tools able to diagnose design problems in the clients' source code, with the purpose of providing consultancy on how to improve the productivity of their clients' industrial developers. Our decision to involve these quality experts is driven by the willingness to receive authoritative opinions from professionals who are used to performing code reviews for their customers. The three participants have more than 15 years of experience in assessing code quality and more than 10 years of experience in code review.
\textsuperscript{6}This minimum number of reviews, meant to ensure an appropriate experience of the interviewees, is aligned with the numbers used in previous studies on code review (e.g., [2]). In this case we proceed with a focus group [36, 40] because it better fits our methodology. Indeed, this technique is particularly useful when a small number of people is available for discussing a certain problem [36, 40]; it consists of organizing a meeting that involves the participants and a moderator. The moderator starts the discussion by asking general questions on the topic of interest and then leaves the participants to openly discuss it, with the aim of gathering additional qualitative data useful for the analysis of the results. In the context of this paper, the first two authors of the paper are the moderators in a meeting directly organized at the consultancy firm. The focus group is one hour long; the participants reflect on and discuss the information needs we identified and the factors influencing their importance. From this analysis, our aim is also to better understand the external validity of our taxonomy. 3.4 RQ$_2$ - On the role of reviewers' needs in the lifecycle of a code review In the context of the second research question we perform a fine-grained investigation of the role of reviewers' needs in code review. We analyze which of them attract more replies, the time required to get an answer, and whether reviewers' needs change throughout the iterations. Specifically, we group code review threads by the reviewers' need they relate to and consider each group independently. Then, to answer RQ$_{2.1}$ we compute the number of replies that each group received: this metric represents how deeply reviewers and authors must interact to exchange the information necessary to address the code review.
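For illustration, the two per-need metrics (number of replies, and minutes to the first response) could be computed as in the following sketch; the thread tuples and values are hypothetical, not our actual data.

```python
from collections import defaultdict
from statistics import median

def summarize_needs(threads):
    """threads: iterable of (need, n_replies, minutes_to_first_response).
    Returns {need: (median replies, median minutes to first response)},
    a proxy for how much interaction each reviewers' need triggers."""
    replies, latency = defaultdict(list), defaultdict(list)
    for need, n_replies, minutes in threads:
        replies[need].append(n_replies)
        latency[need].append(minutes)
    return {need: (median(replies[need]), median(latency[need]))
            for need in replies}

sample = [
    ("N1", 3, 42.0),   # e.g., an 'alternative solution' thread
    ("N1", 5, 120.0),
    ("N2", 1, 15.0),   # e.g., a 'correct understanding' thread
    ("N2", 2, 30.0),
    ("N2", 4, 10.0),
]
stats = summarize_needs(sample)
```

Medians rather than means keep the summaries robust to the heavy-tailed response times typical of review discussions.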
We do not assess the quality of the responses, since we aim at reporting quantitative observations on the number of answers provided by authors to a reviewer's need. RQ$_{2.2}$ represents a follow-up of the previous aspect: besides assessing the number of replies for each reviewers' need, we also measure the time (in minutes) needed to get a response. This complementary analysis can provide insights on whether certain needs require authors to spend more time to make their change understandable, thus providing information on the relative importance of each need, which might be further exploited to prioritize software engineering research effort when devising and developing new techniques to assist code reviewers. Finally, to answer RQ$_{2.3}$ and understand how the reviewers' needs change over the code review iterations, we measure the number of times a certain need appears in each iteration of a code review. This analysis may lead to observations that help the research community promptly provide developers with appropriate feedback during the different phases of the code review process. As a final step of our methodology, we compute pairwise statistical tests aimed at verifying whether the observations of each sub-research question are statistically significant. We apply the Mann-Whitney test [18]. This is a non-parametric test used to evaluate the null hypothesis stating that it is equally likely that a randomly selected value from one sample will be less than or greater than a randomly selected value from a second sample. The results are intended as statistically significant at $\alpha=0.05$. We also estimate the magnitude of the measured differences by using Cliff's Delta (or $d$), a non-parametric effect size measure for ordinal data [27].
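Cliff's delta can be computed directly from its definition; the sketch below (with illustrative data, not our measurements) also maps $|d|$ to the commonly used magnitude labels.

```python
def cliffs_delta(xs, ys):
    """Cliff's d = (#{x > y} - #{x < y}) / (|xs| * |ys|), in [-1, 1]."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

def magnitude(d):
    """Common interpretation thresholds for |d|."""
    d = abs(d)
    if d < 0.10:
        return "negligible"
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"

# Hypothetical reply counts for two groups of threads:
d = cliffs_delta([1, 2, 3, 4], [3, 4, 5, 6])  # -0.75
label = magnitude(d)                           # "large"
```

Because the metric only depends on the ordering of values, it is well suited to the ordinal, non-normal data produced by counting replies and minutes.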
We follow well-established guidelines to interpret the effect size values: negligible for $|d|<0.10$, small for $0.10 \leq |d| < 0.33$, medium for $0.33 \leq |d| < 0.474$, and large for $|d| \geq 0.474$ [27]. 4 RESULTS In this section, we present and analyze the results of our study by research question. 4.1 RQ$_1$ - A Catalog of Reviewers' Information Needs We report the results of our first research question, which aims at cataloging reviewers' information needs in code review and assessing their diffusion. For the sake of comprehensibility, we answer each sub-research question independently. **RQ$_{1.1}$**: What are the kinds of information code reviewers require? Following the methodology previously described (Section 3.3), we obtained 22 groups of reviewers' information needs. They were then clustered according to their intention into seven high-level categories that represent the classes of information needs associated with the discussion threads considered in our study. We describe each high-level category, also including representative examples. N1. Suitability of An Alternative Solution This category emerged by grouping threads in which the reviewer poses a question to discuss options and alternative solutions to the implementation initially proposed by the author. The purpose is not only to evaluate alternatives but also to trigger a discussion on potential improvements. The following example shows a case where the reviewer starts reasoning on how suitable an alternative solution is for the proposed code change. R: “… Since the change owner is always admin, this code might be able to move out of the loop? The following should be enough for this? [lines of code]" N2.
Correct understanding In this category, we group questions in which the reviewers try to ensure that they have captured the real meaning of the changes under review; in other words, this category refers to questions asked to get a confirmation of the reviewers' interpretation and to clarify doubts. This is more frequent when code comments or related documentation are missing, as reported in the example shown in the following. R: “This is now an empty heading … Or do you feel it is important to point out that these are C++ classes?” A: “The entire page is split up into [more artifacts]. The following sections only refer to [one artifact]. I added a sentence introducing the section.” N3. Rationale This category refers to questions asked to get missing information that may be relevant to justify why the project needs the submitted change set or why a specific part of the change was implemented/designed in a certain way. For example, a reviewer may request more details about the issue that the patch is trying to address. These details help the reviewer in better understanding whether the change fits the project scope and style. For instance, in the example reported below the reviewer (R) asks why the author replaced a piece of code. R: “Can you explain why you replaced [that] with [this] and where exactly was failing?” N4. Code Context In this category, we group questions asked to retrieve information aimed at clarifying the context of a given implementation. During a code review, a reviewer has access to the entire codebase and, in this way, may reconstruct the invocation path of a given function to understand the impact of the proposed change. However, we observed that the reviewer needs contextual information to clarify a particular choice made by the authors. These questions range from very specific (i.e., aimed at understanding the code behavior) to more generic (i.e., aimed at clarifying the context in which such code is executed).
The author replies to such questions by providing additional explanations on the code change or contextual project details. For instance, let us consider the thread reported below, where the Author (A) replies to the Reviewer (R) by pointing R to the file (and the line) containing the asked clarification. R: “In what situations would [this condition] be false, but not undefined?” A: “See [file], exactly in line [number], in this case the evaluation of the expression returns false.” R: “It may be helpful to add a comment documenting these situations to avoid future regressions.” N5. Necessity In this category, the reviewer needs to know whether (a part of) the change is really necessary or can be simplified/removed. For example, a reviewer may spot something that seems like duplicated code, yet is unsure whether the existing version is a viable solution or it should be implemented as proposed by the author. In the example below, the reviewer asks whether a certain piece of code could be removed. R: “Is this needed?” A: “I believe its only required if you have methods after the last enum value, but I generally add it regardless. We have a pretty arbitrary mix.” N6. Specialized Expertise Threads belonging to this category regard situations in which a reviewer finds or feels there is a code issue, but the reviewer's knowledge is not sufficient to propose a solution. In these cases, a reviewer typically asks other reviewers to step in and contribute with their specialized expertise. Sometimes, reviewers may ask the author to propose informal alternatives that may better address the found issue. The examples reported below show two cases where the reviewer encourages other developers to reason on how to fix an issue. R: “… Lars, Simon, any ideas? We really need to fix this for [the next release] and the time draws nigh” R: “I need a better way to handle this …not a good idea to hard code digits in there.
example also needs to be removed, its there just to make the tests pass.” N7. Splittable For several reasons (including reducing the cognitive load of reviewers [5]), authors want to propose changes that are atomic and self-contained (e.g., address a single issue or add a single feature). However, sometimes what authors propose may be perceived by reviewers as something that can be addressed by different code changes, thus reviewed separately. For this reason, a reviewer needs to understand whether the split she has in mind can be done; based on this, the reviewer asks questions aimed at finding practical evidence behind this idea. In other words, this category gathers questions proposed by reviewers who need to understand whether the proposed changes can be split into multiple, separate patches. For example, the thread below reports a question where the reviewer (R) asks the author about the possibility of splitting unrelated changes, but the feasibility of this split is not confirmed. R: “This looks like an unrelated change. Should it be in a separate commit?” A: “Actually its related. The input object is needed to log the delete options.” R: “OK, I wasn’t sure because in the previous version we don’t pass ‘force’ into the method, but now we do pass it in via the ‘input’.” In addition to the aforementioned categories, we found several cases in which the presence of a question or question mark did not correspond to a real information need; as done for the categorized information needs, we provide examples in the following. R: “I hate name as a name. What kind of name is this?” R: “If you thought it was necessary to check exe() for errors, then why’d you leave out [another part] here? :)” **RQ$_{1.2}$**: How often does each category of reviewers’ needs occur? Figure 4 depicts bar plots that show the distribution of each reviewers’ need over the considered set of code review threads.
The results clearly reveal that the information needs are not equally distributed and highlight the presence of a particular type (i.e., N1. ‘Suitability of an alternative solution’), which has a far larger number of occurrences than all the others (the result is consistent over the three considered systems). Thus, we can argue that one of the most useful tools for reviewers would be one that allows them to have just-in-time feedback on the actual practicability of possible alternative solutions with respect to the one originally implemented by the authors of the committed code change. The second most popular category is ‘Correct understanding’ (N2), i.e., questions aimed at assessing the reviewers’ interpretation of the code change and clarifying doubts. This finding confirms one of the main results of the work by Bacchelli and Bird [2], who found that understanding the code under review is a central concern of code review. The popularity of this category is similar in all the considered projects, confirming that this need is independent from the type of system or the developers working on it. Another fairly popular need is ‘Rationale’ (N3). This also has to do with the understandability of the code change; however, in this case it seems that a common reviewers’ need is having detailed information on the motivations that led the author to make certain implementation choices. Other categories are less diffused, possibly indicating that reviewers do not always need such types of information. For instance, ‘Splittable’ (N7) is the category with the lowest number of occurrences. This might be either because of the preventive operations that the development community adopts to limit the number of tangled changes [30] or because of the attention that developers pay when performing code changes.
In any case, this category seems to be less diffused and, as a consequence, one can argue that future research should focus more effort on the most popular reviewers’ information needs. RQ$_{1.3}$: How do developers perceive the identified needs? In this section, we present the results of our interviews and focus group with developers. First, we report on the participants’ opinions on the taxonomy derived from the previous two sub-research questions; then, we describe the most relevant themes that emerged from the analysis of the transcripts. We refer to individual interviewees using their identifiers (P# and FG#). Participants’ opinion on the taxonomy. In general, all interviewees agreed on the information needs that emerged from the code review threads: for all the categories, the developers stated that they themselves had repeatedly asked those types of questions. Furthermore, the order of importance of the categories was also generally agreed upon: according to the interviewees, the most important and discussed topic is ‘suitability of an alternative solution’ (N1), followed by ‘understanding’ (N2), ‘rationale’ (N3), and ‘code context’ (N4). Interestingly, the ‘splittable’ (N7) category is perceived as very important by the interviewed developers, but they confirmed that they rarely receive big, long patches to review. Although the focus group participants also agreed with the taxonomy of needs and their ranking, they stated that questions regarding ‘correct understanding’ (N2) are not common in their setting (while this category is ranked second in our taxonomy). When discussing this difference with the focus group participants, they argued that this discrepancy was probably due to the type of projects we analyzed: indeed, we analyzed open-source systems, while the focus group was conducted with participants working in an industrial, closed-source setting.
One developer said: “if I don’t understand something of the change, I just go to my colleague that created it and ask to him. This is possible because we are all in the same office in the same working hours, while this is not the case in the projects you analyzed.” **Understanding a code change to review.** An important step for all the interviewees when reviewing a patch is understanding the rationale behind the changes (N3). P1 explained that, to understand why the author wrote the patch, he first reads the commit message, since “[it] should be enough to understand what’s going on.” Interviewees said that it is very useful to have a ticket attached to the commit message, for example a JIRA issue, to really understand why it was necessary to submit the patch [P1,4, FG1,3]. However, sometimes the patch is difficult to understand, and this leads reviewers to ask for more context or for the rationale of the change, as P1 put it: "Sometime the commit message just says "Yes, fix these things." And you say "Why? Was it broken? Is there a bug report information?" So in this case there is not enough description, and I would have to ask for it." Interestingly, P1 reported that this issue generally happens with new contributors or with novice developers. During the focus group, FG3 said he also uses tests to obtain more context about the change: "In general, to get more context I read the Java docs or the tests." Finally, all the interviewees explained that, to obtain more context or the rationale behind the change, they use communication channels external to Gerrit to get in touch with reviewers/authors, e.g., IRC, email, or Slack. **Authors’ information needs.** Considering the point of view of the author of a change, the interviewees explained that code review is sometimes used as a way to get information from specialized experts, thus underlining the dual nature of the knowledge exchange happening in code review [25, 26].
P2 explained that it is sometimes difficult for an author to have all the information they need to make the change, for example when the change is in a part of the system where they are not expert. In this case, P2 explained: "when you make a change, you usually add the experts of the system to your review, and then you ping them on IRC, asking for a review, if they have some time." Interestingly, this point also came up during the focus group, where a developer said: "if it’s a new system [...] my knowledge lacks at one front, it may be technology, it may be knowledge of the system." In this case, the developer would ask for the help of colleagues. This is also in line with another need we discovered in the previous research question, namely ‘Specialized Expertise’ (N6). Indeed, interviewees said that when they are not familiar with a change, or do not have its full context, they ask an expert to contribute: "[in the project where I work] we have sub-system maintainers: they are persons with knowledge in that area and have more pleasure or willingness to work on those specialized areas. If the reviewers do not reach consensus during the review, we always ask to those experts." [P4]. **Small and concise patches.** When discussing the ‘Splittable’ (N7) need with developers, all agreed that patches should be as self-contained as possible [P1−4, FG1−3]. P3 said: "I always ask to split it, because in the end it will be faster to get it in [the system]." P4 added that it is something they do all the time, because usually people do not see this issue. P2 said: "It’s always better to have 10 small reviews than one big review with all the changes, because no one will review your code. It’s like that. So if you want to merge something big, it’s always better to do it in small changes."
Another point raised during the interviews is that large patch sets are difficult to review and require a lot of time to read [P1−4, FG1−3]; this may delay the acceptance of the patches. P4 explained: "You can have a large patch set that is 90% okay and 10% that not okay: the 10% will generate a lot of discussion and will block the merge of 90% of the code. So yes, it’s something that I do all the time. I ask people, you need to organize better the patch." When talking about the issue, P1 also added that having small patches is very important to make them easier to revert: "yes this is something I find it to be really, really important. Bugs are everywhere – there is always another bug to fix. So the patch should be small enough that [in case of bugs] you can revert it without breaking any particular code." Interestingly, all the interviewees agreed that tools could help reviewers and authors address this need: for example, when a large patch is submitted, the tool could suggest that the author split it into multiple parts to ease the reviewing process. **Offering a solution.** All interviewees agreed that, to do a proper code review, reviewers should always pinpoint the weak parts in the code and offer a solution [P1−4, FG1−3]. P3 said: "When I request the change, I usually put a link or example because I know that maybe the other guy doesn’t know about the other approach. This is usually the main reason why somebody didn’t do something: because he didn’t know it was possible." P1 added: "[…] whenever you propose a change, you should always explain why you need to change it and what. Just putting [the score to] minus two, or even minus one without explanation, is bad because then people don’t know what to do. We have to try being more friendly as a community.” This constructive behavior was also agreed upon during the focus group: one developer said that the worst thing that can happen in a code review is a non-constructive comment.
Interestingly, this reported behavior confirms what we discovered in our previous research question: indeed, ‘Suitability of an alternative solution’ (N1) is the most frequent type of question in code review. In addition, concerning constructive feedback, interviewees said that when they do not fully understand a change, they first ask the author for explanations: “For example, if you don’t understand correctly the change this person is trying to add, you just ask him, and they are forced to answer you. And if you don’t have the context information, they should be able to provide it to you.” Interviewees said that it is better to ask the author for an explanation first, and only afterwards decide whether or not to merge the patch. P4 also explained that sometimes it is better to accept a patch than to start a big discussion on a small detail: “Even though I understand that a better solution will be doable, I’ll probably won’t propose it because a lot of times people won’t have time to actually rework on a new proposal, and you need to balance how you want the project to move forward: Sometimes it’s better to have a code that is not the best solution, but at least does not regress and it fixes a bug.” 4.2 RQ2 - The Role of Reviewers’ Information Needs in a Code Review’s Lifecycle We present the results achieved when answering our second research question, which focused on understanding the role of reviewers’ information needs in the lifecycle of the code review process. We report the results by considering each sub-research question independently. RQ2.1: What are the reviewers’ information needs that attract more discussion? To answer RQ2.1 and understand to what extent the reviewers’ needs attract developers’ discussions, we computed, for each discussion thread that we manually categorized, the number of iterations involving the developers of a certain code review.
Figure 5 depicts box plots reporting the distribution of the number of answers for each previously identified reviewers’ need (red dots indicate the mean). Approximately 18% of code review threads (considering both merged and abandoned patches of all projects) do not have an answer. The first observation regards the median value of each distribution: as shown, all medians are between one and three, meaning that most of the threads are concluded with a small amount of discussion. From a practical perspective, this result highlights that authors can address almost immediately the need pointed out by a reviewer; at the same time, it suggests that tools able to address the identified reviewers’ needs can be particularly useful to avoid the discussion altogether and lead to an important gain in terms of time spent reviewing source code. Among the reviewers’ needs, ‘Specialized expertise’ (N6) is the one with the largest scattering of discussion rate. This result seems to indicate that the more collaboration is required, the larger the number of replies a discussion receives, which possibly delays the integration in the codebase of important changes that require the expertise of several people. The statistical tests confirmed that there are no statistically significant differences among the investigated distributions, with the only exception of ‘Suitability of an alternative solution’ (N1), for which the $p$-value is lower than 0.01 and the Cliff’s $d$ is ‘medium’. This category is the one with the lowest mean (1.7), and we observed that authors of the code change often tend to directly implement the alternative solution proposed by the reviewer without even answering the original comment. This tendency possibly explains this statistical difference.
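The Cliff's $d$ effect size used above can be computed directly from two samples of per-thread reply counts. A minimal sketch follows; the sample data and the interpretation thresholds (the commonly used ones from Romano et al.) are illustrative and not taken from the study:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs drawn from xs and ys."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

def magnitude(d):
    """Interpret |d| with common thresholds (negligible/small/medium/large)."""
    d = abs(d)
    if d < 0.147:
        return "negligible"
    if d < 0.330:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"

# Illustrative reply counts: N1 threads vs. threads of the other categories.
n1_replies = [1, 1, 2, 2, 3]
other_replies = [2, 3, 3, 4, 5]
d = cliffs_delta(n1_replies, other_replies)
```

Being a rank-based effect size, Cliff's delta pairs naturally with the non-parametric tests typically used on such skewed count distributions.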
Overall, according to our results, most of the reviewers’ information needs are satisfied with few replies: most discussions are closed shortly. The only category with more scattered results is the one where reviewers ask for the involvement of more people in the code review process. RQ2.2: How long does it take to get a response to each reviewers’ information need? Figure 6 reports the distribution of the number of hours needed to get an answer for each group of reviewers’ information needs. In this analysis we could only consider the questions having at least one answer; similarly, if a reviewer’s comment got more than one reply, we considered only the first one to compute the number of hours needed to answer the comment. Looking at the results, we can observe that the median is under 7 hours for almost all the categories. A possible reason for this lies in the nature of the development communities behind the subject systems. Indeed, all the projects have development teams that span different countries and timezones: thus, the lack of an immediate reaction to most of the comments made by reviewers can be considered expected. Some differences can be observed in the distributions of two reviewers’ information needs, namely ‘Necessity’ (N5) and ‘Suitability of an alternative solution’ (N1). In these cases, the median number of hours is higher than for the other categories (7 vs 5), while the 3rd quartiles are around one day (meaning that 25% of the questions in these categories took more than one day to receive a response). Conversely, the discussion of other categories generally took less time to start. For instance, the ‘Specialized expertise’ (N6) need has a median of one hour and a 3rd quartile equal to four hours. Such differences, however, are not statistically significant.
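The response-time metric described above (time from a reviewer's question to the first reply, excluding unanswered comments) can be sketched as follows; the function and example timestamps are illustrative, not part of the study's tooling:

```python
from datetime import datetime

def hours_to_first_reply(comment_time, reply_times):
    """Hours from a reviewer's comment to its first reply.

    Returns None for unanswered comments, which are excluded
    from the analysis; when several replies exist, only the
    earliest one counts.
    """
    if not reply_times:
        return None
    first = min(reply_times)
    return (first - comment_time).total_seconds() / 3600.0

# Example: a comment posted at 08:00, first answered at 15:00 the same day.
delta = hours_to_first_reply(
    datetime(2018, 4, 2, 8, 0),
    [datetime(2018, 4, 2, 15, 0), datetime(2018, 4, 3, 9, 0)],
)
```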
To conclude the analysis of our findings for this research question, we can argue that developers generally tend to respond more slowly to questions regarding the proposal of alternatives and the evaluation of the actual necessity of a certain code change; on the other hand, questions where more reviewers are called to discuss seem to get a faster response. RQ2.3: How do the reviewers’ information needs change over the code review process? The last research question investigates whether reviewers’ needs vary over the different iterations of the code review process. Figure 7 presents the results of our analysis, with box plots depicting the distribution of each reviewers’ information need over the various iterations: for the sake of comprehensibility, we considered the normalized number of iterations available in each of the 900 code review threads analyzed.7 Almost all the categories have their median around 0.5, meaning that the majority of reviewers’ information needs are raised in the first half of the review process. Moreover, we are not able to map reviewers’ information needs to any specific iteration. This result might indicate that there is no time-sensitive relationship between those needs and that they arise independently of how much discussion has already been going on in the review. Besides this general conclusion, we also notice some differences between the ‘Necessity’ (N5) category and the others. In the case of ‘Necessity’ (N5), the median and the mean both reach 0.67, thus indicating that most of such questions come later in the process.
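The normalization over iterations is not spelled out in full above; one plausible reading, dividing a comment's iteration index by the thread's total number of iterations, can be sketched as follows (the function name is illustrative):

```python
def normalized_position(iteration, total):
    """Position of a comment in a review's lifecycle, scaled to (0, 1].

    `iteration` is the 1-based iteration in which the comment appears,
    `total` the number of iterations in the whole thread.
    """
    if not 1 <= iteration <= total:
        raise ValueError("iteration must lie within 1..total")
    return iteration / total

# A question raised in the 2nd of 3 iterations lands at ~0.67,
# i.e., in the second half of the review process.
pos = normalized_position(2, 3)
```

Under this reading, a median of 0.5 corresponds to questions raised around the midpoint of a thread, and 0.67 to questions raised noticeably later.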
It is interesting to note that modifications aimed at performing perfective changes, which improve the overall design/style of the source code rather than solving issues, are mainly requested by reviewers at a later stage of the code review process, i.e., likely after the most important fixes solving problems that impact the functioning of the system have already been submitted by the author and answers about the context of the change have been given. Such an observation needs further investigation and validation; however, it suggests the possibility of devising strategies to guide the next generation of code review tools toward a selection of the information that a reviewer might need at an earlier or later stage of the code review process. 5 THREATS TO VALIDITY Our study might have been subject to a number of threats to validity that may affect our results. This section summarizes the limitations of our study and how we tried to mitigate them. Validity of the defined reviewers’ needs. Since the meaning of a question may depend on the context, we may lack a full understanding of its nature and background. This type of threat first applies to our study when we identify code review threads composed of both questions and answers: to this aim, we automatically mined the Gerrit repository, which is a reliable source for the extraction of code review data [10, 28]. To extract code review threads we employed the publicly available APIs of the repository: for this reason, we are confident in the completeness of the extracted data. 7 We also conducted an analysis using the absolute number of iterations, yet the results were equivalent. The adopted open card sorting process is also inherently subjective, because different themes are likely to emerge from independent card sorts conducted by the same or different people.
To ensure the correctness and completeness of the categories associated with the reviewers’ needs identified with the card sorting, we conducted the process iteratively, merging and splitting reviewers’ need categories when needed. As an additional step, we also took into account the authors’ responses and discussion threads when classifying questions made by reviewers, with the aim of properly understanding the context in which a certain question was asked. Moreover, all the authors of this paper, who have more than seven years of experience in software development, assessed the validity of the emerging categories, thus increasing the overall completeness of the taxonomy. Of course, we cannot exclude that we missed specific code review threads pointing to categories that were not identified in our study. We consider questions asked by reviewers through the Gerrit platform as indicators of the actual reviewers’ needs. This assumption may not hold for all projects, as many active projects do not use the Gerrit platform. For example, Tsay et al. [64] highlighted how several developers contribute to software development by using different platforms (e.g., GitHub). However, we partly mitigated this threat to validity by carefully selecting software systems broadly studied in code review research [8, 38, 39] and having a large amount of code review data (which indicates that they actively use Gerrit). The study of different platforms such as GitHub, GitLab, or Collaborator is left for future work. **External validity.** As for the generalizability of the results, we conducted this study on a statistically significant sample of 900 code reviews that include more than 1,800 messages belonging to three well-known projects that have used the Gerrit platform since 2011. A threat to validity in this category may arise when considering closed-source projects.
In that case, the experience of closed-source reviewers may affect the need to ask clarification questions; therefore, the findings we obtained in open-source projects may not be generalizable to a closed-source context. As part of our future research agenda, we plan to extend this study by including closed-source projects. 6 DISCUSSION AND IMPLICATIONS Our quantitative and qualitative results showed that reviewers have a diversity of information needs at different conceptual levels and pertaining to different aspects of the code under review. In this section we discuss how our results lead to recommendations for practitioners and designers, as well as implications for future research. (1) Selection of assistant experts. The results achieved by mining code review repositories and interviewing practitioners indicate that ‘Suitability of an alternative solution’ (N1) and ‘Correct understanding’ (N2) are not only the most recurring needs, but also those perceived as the most important. When discussing these topics with developers from both open-source and industrial systems, we uncovered possible areas where current code review tools can offer better features. For example, a key need for reviewers is being able to communicate with the experts of the sub-system under review; this underlines the importance of tools able to recognize developers’ expertise and create recommendations. Researchers have taken the first steps in this direction. For instance, among others, Patanamon et al. [63] proposed RevFinder, an approach to search for and recommend reviewers based on the similarity of previously reviewed files, while Thongtanunam et al. [59] validated the performance of a reviewer recommendation model based on file path similarity. An interesting novelty that emerged from our analysis, with respect to existing work on reviewer recommendation, is the target of the recommendation.
In fact, existing reviewer recommendation mechanisms target the author of the change, who has to select the reviewer, and propose reviewers for full changes or files. Instead, we found that reviewers also have the need to consult an external expert, possibly for a more specific part of the entire change under review. For instance, P3 explained that “reviewers sometimes ask for other reviewers that may be more expert”; thus, having an assistant that can help in the selection of an expert reviewer may increase their productivity. Targeting reviewers instead of change authors and having a finer-grained focus for the recommendation mechanism can lead to interesting changes in both the model (which may use different features to compute expertise and the difference of expertise among reviewers) and the evaluation approach (which may no longer be based on just matching the actually selected reviewers). Further studies can be designed and conducted to better understand this novel angle. (2) Early detection of splittable changes. Even if the ‘Splittable’ (N7) category is the least frequently occurring, interviewees argued that it would be really useful to automatically detect splittable code changes before submission. For example, in the focus group all the participants (FG1−3) suggested: “if it’s an unrelated change [...], pull it out of this ticket and put it on another issue.” In fact, this would (1) decrease the time spent detecting this issue and asking the author to re-work the change, as well as (2) reduce the risk of introducing defects in the source code [30]. Researchers have already underlined the risks of tangled code changes (i.e., non-cohesive code changes that may be divided into atomic commits) for mining software repositories approaches [30] and have proposed mechanisms for automatically splitting them.
For instance, Herzig and Zeller [30] proposed an automated approach relying on static and dynamic analysis to identify which code changes should be separated; Yamauchi suggested a clustering algorithm tuned to identify unrelated changes in a commit [69]; and Dias et al. [22] proposed a methodology to untangle code changes at a finer granularity, i.e., by selecting the single statements of a code review that should be placed in other commits. More recently, researchers have also proposed untangling techniques tailored explicitly to code review [5, 58] and have conducted the first experiments to measure the effects of tangled code changes on code review [21, 58], substantiating the value of separating unrelated changes. Despite these advances in splitting algorithms and their immediate practical value, no commercial code review tool offers this feature. Our analysis underlines even more the relevance of having such a feature integrated as early as possible in the development process, possibly in the development environment, so that authors send already self-contained patches for review. Moreover, despite the notable research advances in the field, we believe that there is still room for improvement, e.g., by complementing state-of-the-art methods with conceptual information aimed at capturing the semantic relationships between different code changes. Also, early improvement of code changes before review is in line with the work by Balachandran [4], who reported that the time to market can also be reduced by creating automatic bots able to conduct preliminary reviews [4]. In this regard, there are still plenty of opportunities and challenges regarding how and when bots can automatically help reviewers during their activities and whether they may be employed to address some of the developers’ needs in code review. (3) Automatically detecting alternative solutions.
In connection with the most frequent need (i.e., ‘Suitability of an alternative solution’ (N1)), an interviewee from one of the open-source projects explained that he prefers to propose an alternative solution before rejecting a patch: “[...] usually I put a link or an example.” In this light, a promising avenue for an impactful improvement in code review is to integrate a tool that automatically mines alternative solutions. Accordingly, a first interesting step would be to investigate how to integrate at code review time an approach such as the one proposed by Ponzanelli et al., which systematically mines community-based resources such as forums or Stack Overflow to propose related changes [46]. Another promising starting point in this direction is the concept of programming with “Big Code,” as proposed by Vechev and Yahav [67], to automatically learn alternative solutions from the large amount of code available in public code repositories such as GitHub. (4) Synchronous communication support. The absence of a proper real-time communication channel within code review tools was a common issue that emerged from both the interviews with the open-source developers and the focus group. In fact, two interviewees ([FG1] and [FG3]) explained: “you can just go to the author and ask to him in person, and maybe it would be a long discussion [...]”. This is in line with the experience reported by developers at Microsoft in a previous study by Bacchelli and Bird [2]. Nevertheless, in-person discussions can happen only if both author and reviewer are co-located; otherwise, logistic barriers could impose serious constraints [2]. Yet, open-source developers are able to fulfill this real-time communication need using alternative channels; P2 stated: “we usually have an IRC channel [...]”.
These two observations suggest that, when possible, developers prefer to rely on direct communication to discuss feedback; this may be to avoid discussing difficult criticism in a public online forum and to have a higher communication bandwidth than small online thread comments allow. In both scenarios, our results show that current code review tools are clearly not able to fully satisfy the communication needs of the involved people. Future work should be conducted to understand how communication can be facilitated within the code review tool itself (thus improving the traceability of discussions, which is relevant for future developers’ information needs [55]); in principle, this future analysis should take into account not only technical aspects to increase the communication bandwidth, but also the social aspects that could currently hinder developers from discussing certain arguments with the current tools. (5) Automatic change summarization. ‘Correct understanding’ (N2) and ‘Rationale’ (N3) are also key information needs for reviewers. Normally these are fulfilled by perusing the code change description or additional comments. Nevertheless, our interviewees reported cases in which these sources of information were insufficient; on this point, P3 reported: “I even had cases where the description didn’t have anything in common with the code”. Indeed, this shows that another significant source of delay in a code review process arises when patches contain unaligned or missing information (i.e., the commit message is not clear enough or does not match the actual patch). Code summarization techniques appear to be a good fit for this task: indeed, past literature presented different summarization techniques that can be used to either produce or check the current documentation.
For example, Buse and Weimer proposed a technique to synthesize human-readable documentation starting from code changes [13], and several other researchers have contributed further approaches: Canfora et al. experimented with LDIFF [15], Parnin and Görg developed CILDIFF [45], and Cortés-Coy et al. designed CHANGEScribe [19]. Our analysis suggests that supporting code review is a ripe opportunity for research on code summarization techniques to have another angle of impact on a real-world application. 7 CONCLUSIONS Modern code review is an important technique used to improve software quality and promote collaboration and knowledge sharing within a development community. In a typical code review process, authors and reviewers interact with each other to exchange ideas, find bugs, and discuss alternative solutions to better design the structure of a submitted code change. Often reviewers are required to inspect authors’ patches without knowing the rationale or without being aware of the context in which a code change is supposed to be plugged in. Therefore, they must ask questions aimed at addressing their doubts, possibly waiting a long time before getting the expected clarifications. This can potentially cause delays in the integration of important changes into production. In this work we investigated the reviewers’ information needs by analyzing 900 code review threads of three popular open-source software systems (OpenStack, Android, and Qt). Moreover, we conducted four semi-structured interviews with developers from the considered projects and one focus group with developers from a software quality consultancy company, with the aim of challenging and discussing our outcomes. We discovered the existence of seven high-level reviewers’ information needs, which are differently distributed and have, therefore, different relevance for reviewers.
Furthermore, we analyzed the role played by each category of reviewers’ information needs across the lifecycle of a code review, and in particular which reviewers’ information needs attract more discussion, how long a reviewer should wait to get a response, and how the information needs change over the code review lifecycle. Based on our findings, we provide recommendations for practitioners and researchers, as well as viable directions for impactful tools and future research. We hope that the insights we have discovered will lead to improved tools and validated practices, which in turn may lead to higher code quality overall. ACKNOWLEDGMENTS This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642954. Bacchelli and Palomba gratefully acknowledge the support of the Swiss National Science Foundation through the SNF Project No. PP00P2_170529. REFERENCES Received April 2018; revised July 2018; accepted September 2018
Estimating latency and concurrency of Asynchronous Real-Time Interactive Systems using Model Checking

Stephan Rehfeld, Beuth Hochschule für Technik Berlin, Germany (rehfeld@beuth-hochschule.de)
Marc Erich Latoschik, Universität Würzburg, Germany (latoschik@uni-wuerzburg.de)
Henrik Tramberend, Beuth Hochschule für Technik Berlin, Germany (tramberend@beuth-hochschule.de)

Abstract

This article introduces model checking as an alternative method to estimate the latency and parallelism of asynchronous Real-Time Interactive Systems (RISs). Five typical concurrency and synchronization schemes often found in concurrent Virtual Reality (VR) and computer game systems are identified as use-cases. These use-cases guide the development a) of the software primitives necessary for the use-case implementation based on asynchronous RIS architectures and b) of a graphical editor for the specification of various concurrency and synchronization schemes (including the use-cases) based on these primitives. Several model-checking tools are evaluated against typical requirements in the RIS area. As a result, the formal model checking language Rebeca and its model checker RMC are applied to the specification of the use-cases to estimate latency and parallelism for each case. The estimations are compared to measured results achieved by classical profiling of a real-world application. The latencies estimated by model checking approximated the measured results adequately, with a minimal difference of 3.9% in the best case and -26.8% in the worst case. Model checking also detected a problematic execution path not covered by the stochastic nature of the measured profiling samples. The estimated degrees of parallelization approximated the measured results with a minimal difference of 9.3% and a maximal difference of -28.8%. Finally, the effort of model checking is compared to the effort of implementing and profiling a RIS.
Index Terms: D.2.4 [Software/Program Verification]: Model checking; D.2.8 [Metrics]: Performance measures; D.4.8 [Performance]: Modeling and prediction

### 1 Introduction

Many Virtual, Augmented, and Mixed Reality systems (VR, AR, MR), as well as several of today's immersive first-person computer games, exhibit strong real-time requirements. Taking into account the severe psycho-physical artifacts caused by latency [16], specifically for highly immersive setups based on head-tracking, these so-called Real-Time Interactive Systems should be considered at least firm, if not hard, real-time systems [24]. If given timing deadlines are missed, the system's quality of service is not only degraded, but the results might be harmful. To fulfill the increased timing requirements of firm and hard real-time, developers have to control the timeliness of all of the underlying system(s). Modern computer systems use optimization techniques such as out-of-order execution, pipelining, caching, and branch prediction. The algorithms behind these techniques are in general deterministic, but in practice this deterministic behavior is no longer transparent to developers due to the sheer complexity of the interplay of all optimizations. Hence, the time a program needs to execute cannot easily be determined by counting every instruction and multiplying each by the processor clock-ticks necessary for its execution. Additionally, consumer operating systems are built around multi-tasking and multi-user capabilities with fair scheduling as well as multiple service features (including networking), none of which were ever meant to provide firm or hard real-time guarantees in the first place. Finally, RIS architectures have always had to deal with the non-determinism caused by user input as well as by non-deterministic algorithms.
Prominent examples include changes of the view frustum due to user-defined camera movements (e.g., caused by head-tracking), which may dramatically change the number of graphics primitives to be processed from one simulation step to the next, or the heuristic search algorithms often used in artificial intelligence (AI) modules. Hence, timeliness is now affected by multiple hardware and software factors, many of which are beyond the control of a white-box analysis by developers. This is a big obstacle for the required latency optimizations and real-time capabilities. In the past, latency was often reduced by increasing frequency, achieved by higher clock-speeds of processors. For several years now, processing speed has no longer been gained by higher clock-speeds but by increased parallelism of computations. Distributing the various tasks of a RIS on multiple nodes, CPUs, or cores has been a central aspect of RIS frameworks and tools for years. Ideally, a concurrent architecture would utilize as many computing units, e.g., cores, as available without having these units wait for each other. The latter requires a thoughtful incorporation of non-blocking behavior without compromising the overall consistency of the simulated environment. Non-blocking behavior makes a system even more stochastic from a programmer's point of view. A non-blocking call to a client, e.g., by internally branching the thread of control, by triggering an event, or by sending a message, may not cause immediate processing of the client task. Not only does such behavior have an impact on the adherence to real-time constraints, it also affects the overall consistency of a simulated scene. By relaxing almost every sequential ordering, common pitfalls of concurrent systems such as deadlocks and race conditions emerge.
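The deferred effect of a non-blocking call can be made concrete with a minimal, single-threaded sketch (class and method names are hypothetical, not from the article): a non-blocking "send" merely enqueues a message, so the caller still observes stale state right after the call, until the receiver eventually drains its mailbox.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical illustration: a non-blocking send only enqueues a message;
// processing happens later, so the effect is not immediately visible.
public class NonBlockingSend {
    static final Queue<Runnable> mailbox = new ArrayDeque<>();
    static int worldState = 0;

    // Non-blocking call: returns immediately, the effect is deferred.
    static void send(Runnable message) { mailbox.add(message); }

    // The receiver's thread of control drains its mailbox at a later time.
    static void processMailbox() {
        Runnable m;
        while ((m = mailbox.poll()) != null) m.run();
    }

    public static void main(String[] args) {
        send(() -> worldState = 42);
        System.out.println("after send: " + worldState);       // still 0
        processMailbox();
        System.out.println("after processing: " + worldState); // now 42
    }
}
```

In a real concurrent system the mailbox is drained by another thread, which is exactly what makes the moment of the state change unpredictable from the caller's point of view.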
As of today, most RIS applications use coarse-grained concurrency schemes that isolate dedicated tasks like input processing, physics simulation, AI, application logic, or rendering as concurrently running tasks. Some advanced approaches also provide a much finer-grained concurrency, but in any case, all of these approaches have to include some synchronization primitives to explicitly assure consistency across all simulation tasks while still supporting asynchronous behavior where possible. So far, the only viable approach to control timeliness, to reduce latency, and to fulfill increased timing requirements has been runtime tests and extensive profiling for later optimization. Profiling depends on either direct source code access for white-box instrumentation or the availability of profiling tools tailored to the specific RIS platform. A prominent example is the now retired SGI Performer platform, which was developed to support concurrency on multi-CPU platforms [42]. Today, profiling support is commonly provided, from dedicated tasks like graphics rendering up to complete frameworks like Unity3D or the Unreal Engine. Still, profiling and testing a RIS application is an extremely time-consuming task for two reasons: First, the number of possible execution paths grows roughly exponentially due to all the sources of non-determinism. Second, in order to profile a system, one must first develop that system with all the identified aspects which may influence the target measurements. An alternative approach to profiling is model checking. Model checking uses a formal model of the target system. This model is then checked by a computer program for various software quality properties. The model also has to capture all identified aspects which may influence the target measures, but this often is much less work than building a complete system.
Hence, it is a good method to detect potential problems of an architecture beforehand. Model checking is already common in the development of software with zero-fault tolerance, such as the software of the Mars rover Curiosity [22], and in the development of large clustered systems like Amazon Web Services [38]. While it is a promising method to tackle the problems of timeliness control and concurrency of RIS applications, to our knowledge model checking has never been used in the development of RISs so far. This motivates the following questions:

Q1: Can we apply model checking to predict the behavior of a concurrent RIS with respect to latency and degree of parallelism?

Q2: In the positive case of Q1, how does model checking perform in absolute quantities in comparison with profiling, the state of the art in RIS engineering?

Today, many VR- and AR-related research questions refrain from low-level technical details. The impression may be given that all low-level technical aspects are solved, considering the widespread availability of sophisticated ready-to-use software packages like Unity or the Unreal Engine, systems with a strong background in the related area of computer games. But the late promises of consumer VR have initiated an increased interest in solving these problems. Hence, alternatives would help to improve these tools. Maybe not just coincidentally, some game studios have recently decided to stop using these ready-made packages and to start their own developments in order to explore alternative solutions, e.g., Deck13 with the FLEDGE Engine, GameDuell with its own development based on HAXE, or even small companies such as Black Pants Game Studio (Scape Engine). The article proceeds with a review of the related work, followed by an analysis and identification of potential model checking tools for the given task.
This section is followed by an identification of five typical coarse-grained RIS concurrency and synchronization schemes as use-cases. These schemes guide the development of the necessary implementation primitives to be applied to the chosen target RIS platform. A graphical editor is introduced which supports the specification of various concurrency and synchronization schemes. This editor is used to specify the required formal models of the use-cases, which are then model-checked. The accompanying implementations of the use-cases are profiled, and the results from model checking and profiling are compared and discussed. Finally, the effort of model checking is compared to the effort of implementing and profiling a RIS.

### 2 RELATED WORK

### 2.1 Real-Time Interactive Systems

#### 2.1.1 Concurrency and Parallelism

Many computations inside a RIS are data-parallel problems, e.g., rendering and physics simulation. Therefore, they are often performed on GPUs, which are usually tailored for data-parallel concurrency. In contrast, multi-core CPUs target task-parallel concurrency. Hence, tasks that can run in parallel need to be identified and scheduled to available cores, and the results need to be merged into a consistent world state, as is often the case for the tasks taking place in the application stage. In the past, the VR community focused on coarse-grained task-parallel concurrency schemes for the application stage, as in Lightning [7], OpenMASK [34], ViSTA [3], DLoVe [12], Avango [45], and many state-of-the-art game engines like Unity3d or the Unreal Engine. Developers could always introduce a finer-grained concurrency in these systems, e.g., using the standard methods of concurrent programming provided by the underlying operating system, but then they had to cope directly with all the challenges described by Lee [32]. Recent RIS platforms explicitly support fine-grained concurrency to better utilize the computational power of modern multi-core CPUs.
For example, FlowVR [33] introduces a concurrent data-flow network where each node of the graph runs concurrently, and Simulator X [31] exploits Hewitt's Actor Model [19] and message passing.

#### 2.1.2 Latency

Latency in general is the delay between cause and effect. In the context of a RIS, the most important latency from the user's point of view is the end-to-end latency: "the end-to-end latency is the time taken from an input device changing state to a consequent change on the screen" [44]. A high end-to-end latency can have the following consequences [16, 14]:

- Induce simulator sickness.
- Change the user's behavior and lower his/her ability to perform tasks like reaching, grasping, or object tracking.
- Change the way multisensory information is combined into a percept.

Building upon Mine [36], Steed [44] identified the sources of latency depicted in Figure 1. Prominent approaches tackle the latency at the very end of the overall processing pipeline. Post-Rendering 3D Warping [35] warps the rendered image w.r.t. the most current position and orientation of the user's head at the end of the rendering stage to reduce the effect of latency causing a false (slightly outdated) perspective. Similarly, frameless rendering [11] targets the same problem but uses a per-pixel calculation to include the most recent changes to the scene and the perspective. Both techniques have been shown to be effective. However, while they cope as well as possible with the latencies caused either between sensor read and final rendering or by the application stage computations (see Figure 1), they do not target the latency problem at its potential sources in the application stage.

#### 2.1.3 Concurrency and Latency Control

Usually, the performance of a RIS is determined by benchmarking or profiling. The platform code of the RIS is instrumented to collect data for calculating specific metrics like parallelism and latency.
Applications are then executed for various execution paths using the instrumented platform code. Furthermore, some micro benchmarks are commonly used to measure only specific aspects. In most cases, the platform is instrumented manually by the person who wants to perform the benchmark. Most commercial platforms, e.g., the retired SGI Performer [42], Unity3d, or the Unreal Engine 3, and some research platforms, e.g., Simulator X [40], have dedicated profiling tools that instrument the platform and calculate metrics. Furthermore, GPU manufacturers offer frame-profiling tools to analyze the rendering of single frames, such as AMD GPU PerfStudio [2] and NVIDIA Nsight [39]. Results from benchmarking and profiling are usually used in short development cycles to optimize an application. The measured results are used to identify performance bottlenecks. The identified spots are tuned, and the measurement starts again. The problem of this approach is that concurrent systems usually have thousands or millions of execution paths, and the measurements cover only a small fraction of them. The non-deterministic sources dramatically aggravate this problem. Hence, a system may have execution paths that totally differ from what has been measured, and this is highly context-dependent. The stochastic approach of performing benchmarks or profiling for a longer time does not solve this problem, because there never is a guarantee that the problematic paths really occur. A good example is a system described by Lee [32] that deadlocked after four years in production use, despite careful source code reviews and tests.

### 2.2 Model Checking

Starting in the 1980s, model checking became the primary method to reason about correctness [10]. Most programs can be described as a finite graph $M$ that consists of states, additional properties for each state, and transitions between states. The graph $M$ is also called the state space.
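The idea of a state space $M$ and of exhaustively checking a property over it can be sketched with a toy example (all names and the deadlock property are illustrative choices, not taken from the article): states form a finite graph, and the checker explores every reachable state rather than sampling a few execution paths.

```java
import java.util.*;

// Toy state space M: states are integers, transitions a successor map.
// "Model checking" here means exhaustively exploring every reachable
// state and testing a property on each one.
public class StateSpace {
    final Map<Integer, List<Integer>> successors = new HashMap<>();

    void addTransition(int from, int to) {
        successors.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Exhaustive search: does any state reachable from 'init' deadlock,
    // i.e. have no outgoing transition?
    boolean hasReachableDeadlock(int init) {
        Deque<Integer> frontier = new ArrayDeque<>(List.of(init));
        Set<Integer> visited = new HashSet<>();
        while (!frontier.isEmpty()) {
            int s = frontier.pop();
            if (!visited.add(s)) continue;          // already explored
            List<Integer> next = successors.getOrDefault(s, List.of());
            if (next.isEmpty()) return true;        // deadlock state found
            frontier.addAll(next);
        }
        return false;                               // every state can progress
    }

    public static void main(String[] args) {
        StateSpace m = new StateSpace();
        m.addTransition(0, 1);
        m.addTransition(1, 0);                      // a live loop
        System.out.println(m.hasReachableDeadlock(0)); // false
        m.addTransition(1, 2);                      // state 2 has no successor
        System.out.println(m.hasReachableDeadlock(0)); // true
    }
}
```

Real model checkers such as RMC generate $M$ from a formal specification and check far richer properties, but the exhaustive exploration of every reachable state is the same principle that distinguishes them from stochastic profiling.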
Model checking uses a formal language to specify an algorithm or system and then generates the state space $M$ out of the formal specification. Afterwards, it is checked whether the state space satisfies certain constraints. Two families of formal languages exist: (a) imperative languages and (b) declarative languages. Imperative languages usually are close to programming languages, such as C or Java, while declarative ones have a more mathematical style based, e.g., on first-order logic and temporal operators. Both approaches have proved their value in real-world scenarios. The imperative language PROMELA was used during the software development for the Mars rover Curiosity [22], while the declarative language TLA+ was used for the development of S3, DynamoDB, EBS, and the internal distributed lock manager of Amazon Web Services [38]. Curiosity landed successfully, and Amazon Web Services works stably and reliably for thousands of customers. Other formal languages and model checkers have been used in specific target areas; e.g., UPPAAL [28] is often used in industrial projects [6], especially for embedded systems [4, 5]. Using model checking to evaluate the architecture and concepts of a RIS requires additional effort. First, the developers need to become familiar with model checking and need to learn a specification language. Second, specifications need to be written and checked. Newcombe et al. [38] report on the effort at AWS to learn PlusCal and TLA+. Developers typically learn these languages within 2–3 weeks. Writing a specification is done within "a couple of weeks" [38]. Lamport [26] explored model checking using PlusCal on an algorithm [13] that was known to contain a bug [15]. Lamport's effort to write a PlusCal specification of the algorithm was about 10 hours.

### 2.3 Discussion

A formal method promises to overcome some of the deficits caused by the stochastic nature of profiling and testing.
The requirements of RISs for low latency and high computational power seem predestined for model checking. Depending on the quality of the model, it may also weaken the influence of the many non-determinisms in RIS applications discussed before. Hence, we will apply model checking to typical RIS concurrency and synchronization schemes and will compare the results to a common profiling approach. We will perform this task using an actor-based platform. We chose Simulator X [31] since 1) it is freely available on GitHub and 2) it has been instrumented in prior work for the necessary profiling [40]. In the actor model, every concurrently running thread of control (called a process in the model) can communicate asynchronously with every other. This asynchronous communication can be seen as a generalization of subroutine calls, now extended to concurrent architectures. This generalization will a) allow the implementation of various concurrency and synchronization schemes and b) render the results achieved on top of this model applicable to other systems as well. In addition, it will maximize parallelism and minimize latency due to increased performance. On the other hand, it will require explicit synchronization primitives to gain fine-grained control over the frequencies and triggering order of tasks. All typical RIS architectures can be implemented on top of the actor model and the underlying asynchronous message passing system. However, certain programming techniques, while possible in general, are considered deprecated in order to unleash the full potential of the software quality aspects the model provides, e.g., direct unguarded shared memory access.

### 3 Identifying a Suitable Model Checker

We have compared nine formal languages and model checkers to identify suitable candidates to reason about the parallelism and latency of a concurrent RIS.
Two types of criteria are used: Specific criteria are important for our purpose of using model checking to estimate the latency and parallelism of a concurrent RIS on the target platform. General criteria cover aspects that make the formal language and the model checker easier to use. Both sets of criteria were combined but weighted $2:1$ in the final result to increase the RIS-specific utility. A detailed discussion of every candidate and every criterion is beyond the scope of this article. Instead, the results are presented in Table 1. The criteria are explained in more detail in the following Sections 3.1 and 3.2. Rebeca won this comparison and is the best candidate for the goal of the work presented here. Rebeca [43] is an imperative specification language for message-passing-based systems, designed to close the gap between formal methods and software engineering [41]. Rebeca is documented by a handbook [20] and several publications. Its syntax is similar to Java; hence it is easy to learn for many developers. An extension to Rebeca, called Timed Rebeca, exists to model check specifications of real-time systems. The Rebeca Model Checker [1] (RMC) is an open-source tool written in Java. RMC exports the state space in an XML file; hence it is easy to integrate with other tools.

#### 3.1 General Criteria

**GC1: Documentation** A good documentation makes it easier to write specifications in a formal language and to use its model checker. A language receives the grade ++ if an up-to-date book exists about it. It receives + if several scientific publications written by different people exist. The grade 0 is given if only publications by the original author of the language exist.

**GC2: Widely used** Wide usage of a language has two advantages. First, it indicates that the language and the model checker have reached a mature state usable for real-life scenarios. Second, templates for standard problems can be found, which shortens the time
required to write a specification. The grade ++ is given if a language was used for real-life applications. The grade + is given if the language is used by several research groups at different institutes. A −− is given if the tool was only used by its original author.

**GC3: Implementation** This criterion describes whether an implementation of a model checker is available to the public. A language is rated ++ if a model checker and its source code are available. It receives a + if a closed-source implementation is available and a −− if no implementation is available to the public.

### 3.2 Specific Criteria

**SC1: Real-time** RISs are real-time systems. Hence, it is useful if the language itself or its standard library contains elements for the specification of real-time systems. A language is rated ++ if the language itself contains elements for real-time behavior. The grade + is given if not the language but a standard library contains elements for real-time systems. A 0 is given if real-time behavior needs to be specified from scratch.

**SC2: Message passing** The chosen target platform uses message passing. Hence, it is advantageous if the formal language already contains elements for message passing or if a reusable specification exists. A language receives a ++ if it contains elements for message passing and a + if message passing is not supported directly but can easily be specified. The grade 0 is given if message passing can at least be specified. The grade −− is given if there exists any known shortcoming for creating a reusable message passing specification.

**SC3: Integration with other tools** Our goal is to integrate model checking into the development process of a RIS application. Hence, it is important that the model checker can be integrated into the current tool chain. Simulator X is implemented in Scala [30], which produces byte-code for the Java Virtual Machine.
Thus, if the model checker is implemented in Java, it receives a ++, and a 0 if it is written in another language.

### 4 Concurrency and Synchronization Schemes

This section identifies different prominent concurrency and synchronization schemes (CSSs) that deal with triggering order and frequency. Recent RIS architectures support various concurrency schemes, from coarse-grained architectures which encapsulate classical sub-systems for, e.g., input, graphics, physics, or AI, to fine-grained concurrency which provides inter-sub-system parallelization of various degrees. While the chosen RIS platform Simulator X belongs to the latter group, the examples use a coarse-grained concurrency to increase the interpretability and utility of the results for the wide range of existing, de-facto standard systems. The identified CSSs are explained based on a typical VR scenario consisting of the following sub-systems:

1. A **tracker connection** that communicates with a tracking system and provides the transformation of the user's head.
2. An **application logic** that manipulates the scene upon the user's interaction, such as spawning new objects after the user has pushed a button.
3. A **physics simulation** that simulates the physical behavior of objects in the scene.
4. A **renderer** that renders the scene using a graphics port.

**CSS1: Sub-systems run in sequence** This is one of our two base-line schemes. All sub-systems run in sequence. First, the application logic performs its calculations. Afterwards, it triggers the physics simulation. Finally, the physics simulation triggers the renderer, which renders the new scene, and the loop starts again. The frequency of the tracker is controlled by the tracking system and hence is not part of the loop. The scheme is illustrated in Figure 2a.

**CSS2: All unbound** The second base-line scheme is that all sub-systems run with the maximum possible frequency, as illustrated in Figure 2b.
After performing a simulation step, the sub-system updates its internal world state upon the data it received and triggers itself. In this configuration, each sub-system runs with its maximum frequency. Usually, an unbound frequency is neither necessary nor desirable. An extremely high frequency consumes a lot of processing power and has no advantages for the simulation quality. We still provide this extreme test case to later check the limitations of our approach.

**CSS3: Each at fixed frequency** Limiting the frequencies within the system is reasonable. For numeric stability, the physics simulation may run with a higher frequency, e.g., 120 Hz. The renderer should have the same frequency as the output channel it is connected to. Because a user cannot recognize changes faster than the frequency of the renderer, the application logic may have the same frequency. The scheme, illustrated in Figure 2c, was suggested by Mönkkönen [37].

Table 1: Comparison of nine formal languages and their model checkers.
<table>
  <thead>
    <tr> <th>Points</th> <th>8</th> <th>8</th> <th>-1</th> <th>-4</th> <th>10</th> <th>16</th> <th>6</th> <th>3</th> <th>12</th> </tr>
  </thead>
  <tbody>
    <tr> <td>SC1: Message passing</td> <td>+</td> <td>+</td> <td>++</td> <td>--</td> <td>0</td> <td>++</td> <td>--</td> <td>++</td> <td>++</td> </tr>
    <tr> <td>SC2: Integration</td> <td>++</td> <td>++</td> <td>--</td> <td>--</td> <td>++</td> <td>++</td> <td>+</td> <td>++</td> <td>++</td> </tr>
    <tr> <td>SC3: Integration</td> <td>++</td> <td>++</td> <td>--</td> <td>--</td> <td>++</td> <td>++</td> <td>0</td> <td>++</td> <td>0</td> </tr>
  </tbody>
</table>

**CSS4: Start all at the same time** To increase parallelism, renderer, physics, and application logic can start in parallel, as shown in Figure 2d. This scheme usually adds latency between the simulation sub-systems and the renderer.

**CSS5: Bind renderer to tracker** In VR it is always desirable to have a low latency between input and output. Hence, it is often more reasonable to couple the renderer to the tracking connection and not to the simulation sub-systems (Figure 2e). A latency between the physics simulation and the renderer is hardly recognizable, while a latency between moving the head and seeing an updated scene, rendered with the new view frustum, is usually easily recognizable by the user. The other sub-systems run with their own frequencies, as in CSS3.

### 5 PRIMITIVES

Five different synchronization and concurrency primitives are identified which are necessary for the specification of the schemes CSS1–5 in the formal Rebeca language, and for the implementation in the asynchronous target RIS platform.

### 5.1 P1: Unmoved mover

The unmoved mover exists exactly once and sends one message to a set of other processes. It is the starting point for an application and is required for all CSSs.

### 5.2 P2: Sub-system

A sub-system is a process that performs calculations, communicates the results to other processes, or communicates with I/O channels. This could be a renderer, physics simulation, artificial intelligence, or any other part from coarse to fine granularity. A sub-system can perform the following three tasks:

1. Update its internal world state upon incoming messages.
2. Modify its internal world state, e.g., using input or output channels.
3. Communicate its internal world state to other sub-systems.

Hence, sub-systems need two types of messages:

- MT: Triggers local (sub-system-specific) simulation step(s).
- MS: Communicates (parts of) the new world state.
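Assuming a mailbox-based actor platform like the one described above, primitive P2 can be sketched as follows (all class and field names are hypothetical, not taken from Simulator X or Rebeca): a sub-system holds an internal world state, applies MS messages to it, and performs one local simulation step per MT trigger it receives.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of primitive P2: a sub-system with a mailbox that
// distinguishes the two message types MT (trigger) and MS (state update).
public class SubSystem {
    enum Kind { MT, MS }                       // the two message types

    static final class Message {
        final Kind kind;
        final int payload;                     // world-state value for MS
        Message(Kind kind, int payload) { this.kind = kind; this.payload = payload; }
    }

    final Queue<Message> mailbox = new ArrayDeque<>();
    int worldState = 0;                        // state communicated via MS
    int stepsDone  = 0;                        // simulation steps run via MT

    void send(Message m) { mailbox.add(m); }

    // Drain the mailbox: MS updates the world state, MT runs a step.
    void run() {
        Message m;
        while ((m = mailbox.poll()) != null) {
            if (m.kind == Kind.MS) worldState = m.payload;  // task 1
            else stepsDone++;   // a real sub-system would simulate here
        }
    }
}
```

Synchronization points such as triggering other processes at the begin or end of a step would be additional sends issued from within `run()`; they are omitted here to keep the sketch minimal.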
We assume that a new world state can be communicated by multiple messages of type MS. Besides receiving and communicating new world states, a sub-system may trigger other processes, especially other sub-systems. Here, the condition and the point in time when a trigger message is sent are crucial. We call the mechanism of sending these messages a synchronization point. A synchronization point is a condition that may eventually become true and upon which a sub-system sends a trigger. Two synchronization points are obvious for coarse-grained sub-systems:

1. **OnStepBegin**, when the calculation of a new simulation step begins.
2. **OnStepEnd**, when the calculation of a new simulation step has finished.

### 5.3 P3: Frequency Limiter

Because an extremely high frequency is neither desirable nor useful, a process is required to limit the frequency. In general, a loop is realized in a message-passing-based system by a process that sends a message to itself. A frequency limiter is added to such a loop and delays a message it has received. For example, if the frequency limiter has an upper limit of 60 Hz and receives two messages within a time span of less than 16666 µs, the second message is forwarded with a delay. The limiter is used in all schemes but CSS2. If the other process in the loop needs too much time, the frequency simply drops.

### 5.4 P4: Frequency Trigger

External devices, such as the tracker, have a fixed frequency that cannot be controlled by the application. Hence, an element is required that always sends a message at a fixed frequency. While the frequency limiter relies on the circumstance that the process triggers itself, the frequency trigger sends a message to another process at a fixed frequency. If the triggered process needs too much time, the trigger still sends messages and the triggered process will saturate.

### 5.5 P5: Barrier

In CSS4 all sub-systems are started at the same time.
Hence, there needs to be an element that sends a trigger after it has received a specified set of messages. For example, in CSS4, the condition is that it has received the trigger from the renderer, physics, and logic.

### 6 IMPLEMENTATION

In this section, a prototypical implementation of a model checker for concurrency and synchronization schemes is described. The overall tool chain of the model checker is illustrated in Figure 3a. The remainder of this section is structured along the tool chain.

#### 6.1 Graphical Editor

Rebeca does not have any abstraction mechanisms, such as inheritance or a template mechanism like in C++. Defined processes can only be parametrized by constructor parameters. Furthermore, the set of other processes with which a process is able to communicate must be known at definition time of the process. To overcome these limitations, we created a graphical editor to specify concurrency and synchronization schemes as described in Section 4 by using the primitives described in Section 5. The graphical editor is shown in Figure 4. It was implemented using the NetBeans RCP. In Figure 4, primitive **P1** is part of every specification by default and hence cannot be removed. Primitives **P2**–**P5** can be added using a palette of primitives and are shown as nodes. Edges between the nodes represent communication between the primitives. Together they define the modeled processes and the communication topology. Additionally, the durations of simulation steps and of the processing of messages can be configured. Finally, a code generator generates a Rebeca specification out of the graphically defined models.

#### 6.2 Rebeca Spec

The automatically generated Rebeca specification file consists of all required primitives, the communication topology, and timing information. It currently does not cover the full Rebeca specification language but only the relevant parts required here.
Hence, we also developed a text editor for Rebeca specifications to further ease customization of the specification. This editor was developed by extending NetBeans's standard text editor.

#### 6.3 RMC

The generated specification is checked using RMC. RMC itself can already check whether the specification deadlocks or whether processes saturate. Furthermore, RMC exports the state space into an XML file.

#### 6.4 State Space

This XML file contains states and transitions. Among other information, a state contains the clock of each process at this state. Furthermore, the transitions contain the information which process processed which message. Care needs to be taken, because this XML file can become very large, depending on the granularity of the model. The state space is parsed by the graphical editor again to perform further analysis of the state space.

#### 6.5 Analysis Algorithms

Before the average parallelism and the latency can be calculated, every possible scenario needs to be generated out of the state space. A scenario is one path through the state space that represents a loop. Hence, there must be a transition between the first and the last state in the scenario. Because the state space tends to be large for non-trivial problems, generating the scenarios out of the state space is time-consuming. Furthermore, care needs to be taken about memory. Hence, scenarios need to be created in a depth-first approach. Nevertheless, creating the scenarios can easily require about 3 GiB of RAM. For each scenario, the average parallelism and the latency between two previously configured processes are calculated. We implemented the two metrics as defined in [40].

#### 6.5.1 Latency

Latencies can occur everywhere between the sub-systems. Our evaluation here analyzes the latency between the tracking system and the renderer. First, all transitions where the tracker sends a message to the renderer are searched.
Afterwards, the next transition in the scenario where the renderer performs a simulation step is searched. Then the time span between both transitions is calculated. Usually, a scenario contains more than one transition that represents sending a message from the tracker to the renderer. Hence, the latency for each message can be calculated in parallel. Nevertheless, it remains very time-consuming.

#### 6.5.2 Average Parallelism

The average parallelism reflects how many processes on average perform work at the same time. It is measured as the Degree of Parallelism (DOP). The DOP states how many processes are performing work in parallel at any point in time \( t \). This can easily be determined from the transitions between the states in a scenario. The clock of the process in the source state represents the beginning and the clock of the process in the destination state represents the end of the processing. Thus, two tuples are generated for each transition, where the clock of the process is used as a time stamp. All tuples can then be sorted by their time stamp. Using this list, the average parallelism can be calculated as described in [40]. Using the profiling tool of the target platform on some demo applications, we observed that many applications idle most of the time. Hence, the average parallelism would be less than 1.0 in those cases. Therefore, we decided to ignore idle times, where no process performs work. This handling is also provided by the profiling tool.

### 7 Results

We model-checked the use-cases CSS1–5 based on the implementation described in this article and compared the results to measurements achieved by profiling of the same use-cases.

#### 7.1 Example Application and Preparation

Figure 3b illustrates the workflow of our toolchain described in Figure 3a. The model checker is fed with the formal specification of the system, including timing information.
Absolute values for the latter depend on the application and the hardware it is executed on, and hence are ideally measured first, e.g., using basic prototype implementations. For our measurement, Simulator X's barrel stack benchmark application was used (Figure 6). In this application, a large barrel is created at start-up. Ten seconds later, a stack of barrels is spawned in front of the large barrel. Another 35 seconds later, the large barrel is pushed into the stack. A user inspects the whole scene in an immersive setup with a head-tracked camera. The application is terminated after 60 seconds of overall run-time. The profiling tool of Simulator X was used to determine parallelism and latency.

The first check of the model is done with a temporal precision of $10^{-3}$ (milliseconds). Afterwards, the precision is increased to $10^{-4}$ and the model check is performed again. If the results of two consecutive checks are equal, the process is terminated and the final results are determined. Otherwise, the temporal precision is increased again for the next iteration. This iterative approach prevents an early state space explosion. In our test case, the results stabilized at a temporal granularity of $10^{-5}$: increasing the granularity to $10^{-6}$ (microseconds) did not change the results for latency and parallelism, but produced a very large state space. As expected, the time consumed to generate the state space depends heavily on the temporal granularity and grows roughly exponentially with the size of the state space. While the state space at $10^{-3}$ is generated in less than a second, generating it at $10^{-6}$ can take more than an hour.

#### 7.2 Discussion

The measured and estimated results are presented in Table 2 and visualized in Figures 5a and 5b. For the latency, the median is used for both measured and estimated values.

Table 2: Estimated and measured results for latency and parallelism for the five concurrency and synchronization schemes. The difference is calculated relative to the respective measured values. A positive value means that the model checker overestimated the value, while a negative one means that the value was underestimated.

<table> <thead> <tr> <th>Scheme</th> <th>Latency estimated</th> <th>Latency measured</th> <th>Difference</th> <th>Parallelism estimated</th> <th>Parallelism measured</th> <th>Difference</th> </tr> </thead> <tbody> <tr> <td>CSS1</td> <td>28.1ms</td> <td>32.7ms</td> <td>-14.1%</td> <td>1.04</td> <td>1.28</td> <td>-18.8%</td> </tr> <tr> <td>CSS2</td> <td>-</td> <td>30.4ms</td> <td>-</td> <td>-</td> <td>1.87</td> <td>-</td> </tr> <tr> <td>CSS3</td> <td>24.1ms</td> <td>32.6ms</td> <td>-26.1%</td> <td>1.41</td> <td>1.29</td> <td>9.3%</td> </tr> <tr> <td>CSS4</td> <td>24.6ms</td> <td>33.6ms</td> <td>-26.8%</td> <td>1.64</td> <td>1.37</td> <td>19.7%</td> </tr> <tr> <td>CSS5</td> <td>7.9ms</td> <td>7.6ms</td> <td>3.9%</td> <td>1.08</td> <td>1.52</td> <td>-28.9%</td> </tr> </tbody> </table>

Figure 5: Visualized results from Table 2.

Figure 6: The barrel stack benchmark application of Simulator X.

Using the median for the measured data is necessary because the measured data also contains the initial bootstrapping of the application, where assets were loaded, which resulted in a few frames with extremely long rendering times. This initialization phase was excluded from model checking, since it is in general not a critical phase for the usage of the application and it is prone to many non-determinisms from the underlying system, e.g., file access times.

Figure 5a shows that the estimated latency underestimates the measured latency, with only one exception: the smallest difference of 3.9% occurred for CSS5, which amounts to only 0.3 ms in absolute terms. The largest difference of -26.8% occurred for CSS4.
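As a sanity check, the "Difference" columns of Table 2 can be recomputed from the estimated and measured values. This is a small illustrative script (the function name `diff_percent` is ours); rounding to one decimal reproduces the table entries.

```python
# Recompute the "Difference" columns of Table 2:
# difference = (estimated - measured) / measured, in percent.
rows = {
    "CSS1": {"latency": (28.1, 32.7), "parallelism": (1.04, 1.28)},
    "CSS3": {"latency": (24.1, 32.6), "parallelism": (1.41, 1.29)},
    "CSS4": {"latency": (24.6, 33.6), "parallelism": (1.64, 1.37)},
    "CSS5": {"latency": (7.9, 7.6),   "parallelism": (1.08, 1.52)},
}

def diff_percent(estimated, measured):
    """Signed difference relative to the measured value, in percent."""
    return round((estimated - measured) / measured * 100, 1)

for scheme, metrics in rows.items():
    lat = diff_percent(*metrics["latency"])
    par = diff_percent(*metrics["parallelism"])
    print(f"{scheme}: latency {lat:+.1f}%, parallelism {par:+.1f}%")
```

CSS2 is omitted because no estimate could be derived for it (the renderer queue overflowed during model checking, see below).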
The reduced latency targeted by CSS5 was predicted well by the model-checking estimation and confirmed by the profiling measurement.

While model checking CSS2, a scenario with an overflowing queue at the renderer was found. Therefore, latency and parallelism were not calculated out of the state space. The renderer did not saturate during the measurement: this problematic execution path did not occur in the measured profiling samples, but it was successfully detected using model checking. However, the problem can easily be reproduced by using an overly complex scene that drops the frequency of the renderer below the rate of its incoming messages; the in-queue of the renderer will then grow. This will first result in an increased latency, and later in an out-of-memory termination of the process. However, the detected problem is a special case, and concurrency problems are usually much harder to reproduce intentionally.

The estimated and measured parallelism is visualized in Figure 5b. The smallest difference is in CSS3 with an overestimation of 9.3%, while the largest difference is found for CSS5 with an underestimation of -28.9%. Unlike the results for the latency estimation, the model checker underestimated the parallelism in CSS1 and CSS5, while overestimating it in CSS3 and CSS4. Hence, there is no clear tendency in the direction of the difference, as there was for the latency estimation.

### 8 Model Checking vs. Implementing and Profiling

To estimate the costs of model checking in terms of effort, we reconstructed relevant data from sources such as commits to a git repository, calendars, and e-mails. Hence, this is a very rough estimate, but it can give a first idea of the relative costs. The total days of all participants are summarized in Table 3. Implementing the synchronization layer for CSS1–CSS5 in the target framework Simulator X took roughly 2 weeks.
To learn the concepts of model checking took 3 weeks, and to learn Rebeca took 2 more weeks (per person). Implementing the graphical editor, including the code generator for Rebeca, required 3 weeks. Creating the specification out of the designed system required 1 week. Checking and refining the model required 2 weeks, as did the implementation of the algorithms to analyze the state space. Hence, specific implementation, specification, and check took 30 days; the check alone took only 15 days.

Table 3: Costs in terms of effort to learn about model checking, to develop tools, and to apply the method initially.

<table> <thead> <tr> <th>Phase</th> <th>Task</th> <th>Effort (days)</th> <th>Σ (days)</th> </tr> </thead> <tbody> <tr> <td>Learning Model Checking</td> <td>In General</td> <td>15 pp</td> <td></td> </tr> <tr> <td></td> <td>Rebeca</td> <td>10 pp</td> <td>25 pp</td> </tr> <tr> <td>Specific Implementation</td> <td>Sync Layer</td> <td>15</td> <td>15</td> </tr> <tr> <td>Tool Development</td> <td>Graphical Editor and Code Generator</td> <td>15</td> <td></td> </tr> <tr> <td></td> <td>Analysis Algorithms</td> <td>10</td> <td>25</td> </tr> <tr> <td>Application of Model Checking</td> <td>Creating Specification</td> <td>5</td> <td></td> </tr> <tr> <td></td> <td>Checking and Refining</td> <td>10</td> <td>15</td> </tr> </tbody> </table>

### 9 Conclusion and Future Work

This article explored the usefulness and applicability of model checking for estimating latency and degree of parallelism in asynchronous RIS applications. Nine model checking languages and tools were analyzed to find appropriate candidates for application in the target area. Out of these nine candidates, Rebeca was chosen based on a set of target-specific criteria. Simulator X, a RIS framework based on message passing, was identified as the target platform because it supports various concurrency schemes due to its asynchronous nature and its actor model.
Five concurrency and synchronization schemes were then identified as use-cases typically found, natively or in slightly modified versions, in many asynchronous RISs with similar sub-system requirements. These use-cases guided the implementation of concurrency and synchronization primitives for the target platform, and of a graphical editor to specify the formal Rebeca model of the use-cases based on the developed primitives.

We then compared the model-checking-based estimations of latency and degree of parallelism of all five use-cases against measured results achieved by profiling of a real-world application. In one scenario, the latency was predicted nearly correctly, with a small difference of 0.3 ms. In the other cases, the model checker always underestimated the latency, with differences between -14.1% and -26.8%. The estimated values for the average parallelism spanned a larger interval, between a maximum difference of -28.9% and a minimum difference of 9.3%. Both results are very encouraging and could be further optimized in future work. General tendencies like the underestimation of the latency prediction could be adjusted by a scaling factor. In general, the better the model captures the final system, the better the results will become. For example, the slightly worse prediction of the degree of parallelism might be due to increased non-determinism affecting concurrency, from the scheduling of the underlying message-passing implementation and operating system, to cache misses in combination with the enabled hyper-threading we used.

To summarize these results given the initial research questions, we conclude:

Q1: **Answer:** Yes, model checking can be applied to predict the behavior of a concurrent RIS with respect to the target properties of latency and degree of parallelism.
Q2.1: **Answer:** Results from model checking vary in absolute quantity for the two target properties; still, this variance is sufficiently close to the measured results to be useful for post- as well as pre-optimization of RIS architectures.

Q2.2: **Answer:** Model checking also identified a problematic scheme that did not show up during profiling. Due to the stochastic nature of a RIS and the underlying hardware platform, disadvantageous execution paths did not occur during measurement but were found by the model checker.

As a result, we propose that model checking provides a promising alternative to the state-of-the-art profiling of RIS architectures, especially with respect to the important RIS properties of latency and degree of parallelism. While model checking is not free and takes time itself, it has the potential to shorten the later development work and fine-tuning, which often is highly unpredictable and much more time-consuming than the model checking, and to reduce the non-determinisms of today's complex interplay of optimizations from all the different system layers.

To decrease the deviation between measured and estimated results, we plan to extend the model with more details. We also consider applying model checking to additional relevant software quality aspects of RISs. We think this will be especially complex but also fruitful for effects such as non-deterministic memory reads and branch prediction. One promising technology to integrate these effects into the model is probabilistic model checking, as supported by Probabilistic Timed Rebeca. Another line of future work is to extend model checking to cases where existing game engines such as Unity or Unreal are used. One idea is to simply abstract these systems as a sub-system in our approach, to be checked as part of a larger system. Alternatively, we would like to model the internal processes of these engines.
Of course, the latter would potentially require information provided by the developers of such engines.

Acknowledgements

The authors wish to thank Ehsan Khamespanah and Pedram Merrikhi.

References

[18] M. E. Latoschik and H. Tramberend. A Scala-based actor-entity architecture for intelligent interactive simulations.
[28] M. Di Luca. New method to measure end-to-end delay of virtual reality.
[35] M. Mine. Characterization of end-to-end delays in head-mounted display systems.
[42] S. Rehfeld, H. Tramberend, and M. E. Latoschik. Profiling and benchmarking event- and message-passing-based asynchronous realtime interactive systems.
[46] C. Hewitt, P. Bishop, and R. Steiger. A universal modular ACTOR formalism for artificial intelligence.
Query Rewriting for Existential Rules with Compiled Preorder

Mélanie König, Michel Leclère, Marie-Laure Mugnier
University of Montpellier, Inria, CNRS
Montpellier, France

Abstract

We address the issue of Ontology-Based Query Answering (OBQA), which seeks to exploit knowledge expressed in ontologies when querying data. Ontologies are represented in the framework of existential rules (aka Datalog±). A commonly used technique consists in rewriting queries into unions of conjunctive queries (UCQs). However, the obtained queries can be prohibitively large in practice. A well-known source of combinatorial explosion is very simple rules, typically expressing taxonomies and relation signatures. We propose a rewriting technique, which consists in compiling these rules into a preorder on atoms and embedding this preorder into the rewriting process. This allows us to compute compact rewritings that can be considered as “pivotal” representations, in the sense that they can be evaluated by different kinds of database systems. The provided algorithm computes a sound, complete and minimal UCQ rewriting, if one exists. Experiments show that this technique leads to substantial gains, in terms of size and runtime, and scales well on very large ontologies. We also compare to other tools for OBQA with existential rules and related lightweight description logics.

1 Introduction

We address the issue of Ontology-Based Query Answering (OBQA), which seeks to exploit knowledge expressed in ontologies when querying data. We consider the novel framework of existential rules, also called Datalog± [Baget et al., 2011; Cali et al., 2012; Krötzsch and Rudolph, 2011]. Existential rules are an extension of function-free positive conjunctive rules that allows for existentially quantified variables in rule heads. These rules are able to assert the existence of unknown entities, a fundamental feature in an open-domain perspective, where it cannot be assumed that the description of data is complete.
Interestingly, existential rules generalize Horn description logics (DLs), in particular lightweight DLs used in the context of OBQA, such as DL-Lite [Calvanese et al., 2005] and EL [Baader, 2003], which form the core of so-called tractable profiles of the Semantic Web ontological language OWL 2. They overcome some limitations of these DLs by allowing for non-tree structures as well as unbounded predicate arity. We consider here the basic database queries, i.e., conjunctive queries (CQs). There are two main approaches to query answering in presence of existential rules and Horn DLs. The first approach is related to forward chaining: it triggers the rules to build a finite representation of inferred data such that the answers can be computed by evaluating the query against this representation, e.g., [Calì et al., 2008; Thomazo et al., 2012]. The second approach, initiated by DL-Lite [Calvanese et al., 2005], is related to backward chaining: it rewrites the query using the rules such that the answers can be computed by evaluating the rewritten query against the data, e.g., [Baget et al., 2009; Gottlob et al., 2011] on existential rules. Some techniques combine both approaches [Kontchakov et al., 2011; Thomazo and Rudolph, 2014]. Each technique is applicable to a specific existential rule fragment. In this paper, we focus on the query rewriting approach, which has the advantage of being independent from the data. This technique typically outputs a union of conjunctive queries (UCQ), with the aim of benefiting from optimized relational database management systems (RDBMS). However, despite the good theoretical data complexity, first experiments exhibited a serious problem related to the size of the rewritten query, e.g., [Rosati and Almatelli, 2010]. Indeed, the output query can be exponentially larger than the initial query, even with very simple ontologies, and even if the output is extended to arbitrary first-order queries [Kikot et al., 2011; 2012]. 
This led to a significant amount of work on rewriting into more compact queries, which may remain first-order queries (hence expressible in SQL) or go beyond them (like Datalog programs, e.g., [Gottlob and Schwentick, 2012; Stefanoni et al., 2012]). Nowadays, mature systems have emerged for OWL 2 QL [Rodriguez-Muro et al., 2013; Civili et al., 2013], as well as prototypes for the EL family and more expressive DLs, e.g., [Eiter et al., 2012]. Existential rules are more complex to process. Entailment with general existential rules is even undecidable (e.g., [Beeri and Vardi, 1981]). However, expressive subclasses ensuring the existence of a UCQ rewriting are known, such as linear rules, which generalize most DL-Lite dialects, the sticky family, classes satisfying conditions expressed on a graph of rule dependencies, and weakly recursive rules [Calì et al., 2009; 2012; Baget et al., 2011; Civili and Rosati, 2012]. A well-known source of combinatorial explosion is some very simple rules that typically express taxonomies or relation signatures. These rules are at the core of any ontology. We propose a new technique to tame this source of complexity. It relies on compiling these rules into a preorder on atoms and embedding this preorder into the rewriting process. Intuitively, each atom “represents” all its “specializations”. Hence, the query rewriting process avoids exploring many CQs and outputs a small UCQ. This UCQ can be seen as a “pivotal” representation, in the sense that it can be evaluated by different kinds of systems: it can be passed to a Datalog engine or unfolded into a positive existential query (a UCQ or a more compact form of query) to be processed by an RDBMS; it can also be directly processed if data is stored in main memory and the appropriate homomorphism operation is implemented.
First, we prove that the new rewriting operator is sound and complete, and provide an algorithm that, given any CQ and any set of existential rules, effectively outputs a minimal, sound and complete UCQ-rewriting, if one exists (which may not be the case due to the undecidability of the problem). Since the computation of the preorder is independent from any query, we can divide this algorithm into a compilation step performed offline, and a query rewriting step, which takes the preorder as input. Second, we report experiments, which show that this optimization leads to substantial gains, in terms of size of the output and query rewriting runtime; even in the case when the desired output is a “regular” UCQ, it remains faster and is able to scale on very large ontologies. 2 Preliminaries Definition 1 (Existential rule) An existential rule (or simply rule hereafter) is a formula \( R = \forall x \forall y (B[x, y] \rightarrow (\exists z H[y, z])) \) where \( B = \text{body}(R) \) and \( H = \text{head}(R) \) are conjunctions of atoms, resp. called the body and the head of \( R \). The frontier of \( R \) is the set of variables \( \text{vars}(B) \cap \text{vars}(H) = y \). The set of existential variables of \( R \) is \( \text{vars}(H) \setminus \text{vars}(B) = z \). In the following, we will omit quantifiers in rules as there is no ambiguity. E.g., \( p(x, y) \rightarrow p(y, z) \) denotes the formula \( \forall x \forall y (p(x, y) \rightarrow \exists z p(y, z)) \). A knowledge base (KB) \( K = (F, \mathcal{R}) \) is composed of a finite set of facts (which can be seen as a single fact) \( F \) and a finite set of existential rules \( \mathcal{R} \).
The CQ entailment problem is the following: given a KB \( K = (F, \mathcal{R}) \) and a CQ \( Q \), does \( F, \mathcal{R} \models Q \) hold? This problem is known to be undecidable in the general case [Beeri and Vardi, 1981]. It can be solved by computing a sound and complete rewriting set \( \mathcal{Q} \) from \( Q \) and \( \mathcal{R} \), provided that such a finite \( \mathcal{Q} \) exists, then asking if \( F \models \mathcal{Q} \). Here, the target query is a union of CQs (UCQ), that we see as a set of CQs, called a rewriting set of \( Q \). Definition 2 (Sound and Complete (rewriting) set of CQs) Let \( \mathcal{R} \) be a set of existential rules, \( Q \) be a CQ, and \( \mathcal{Q} \) be a set of CQs. \( \mathcal{Q} \) is said to be sound w.r.t. \( Q \) and \( \mathcal{R} \) if for any fact \( F \), for all \( Q_i \in \mathcal{Q} \), if \( Q_i \) maps to \( F \) then \( F, \mathcal{R} \models Q \). Reciprocally, \( \mathcal{Q} \) is said to be complete w.r.t. \( Q \) and \( \mathcal{R} \) if for any fact \( F \), if \( F, \mathcal{R} \models Q \) then there is \( Q_i \in \mathcal{Q} \) such that \( Q_i \) maps to \( F \). A set of rules \( \mathcal{R} \) for which any query \( Q \) has a finite sound and complete set of rewritings (in other words, \( Q \) is rewritable into a UCQ) is called a finite unification set (fus) [Baget et al., 2009]. Note that restricting a complete rewriting set \( \mathcal{Q} \) to its most general elements preserves its completeness. A cover of \( \mathcal{Q} \) is an inclusion-minimal subset \( \mathcal{Q}' \) of \( \mathcal{Q} \) such that each element \( Q_i \) of \( \mathcal{Q} \) is more specific than some element \( Q'_i \) of \( \mathcal{Q}' \) (i.e., in database terms: \( Q_i \) is contained in \( Q'_i \)). If a set has a finite cover, then all its covers have the same size, hence all minimal, sound and complete rewriting sets have the same cardinality [König et al., 2012]. Hereafter, we denote by \( \mathcal{R} \) the considered set of rules, by \( R \) a rule of \( \mathcal{R} \), and by \( Q \) the initial query.
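To make these notions concrete: deciding whether one CQ is more specific than another reduces to a homomorphism test, and a cover keeps only the most general queries. Below is a minimal Python sketch, illustrative only and not the authors' implementation; atoms are (predicate, argument-tuple) pairs and, by an assumption of this sketch, variables are capitalized strings while constants are lowercase.

```python
def is_var(t):
    # assumed convention for this sketch: variables are capitalized strings
    return isinstance(t, str) and t[:1].isupper()

def homomorphism(q1, q2):
    """Return True if CQ q1 maps to CQ q2 by a homomorphism,
    i.e., q2 is more specific than q1 (q2 is contained in q1)."""
    def extend(atoms, sub):
        if not atoms:
            return True
        (pred, args), rest = atoms[0], atoms[1:]
        for (p2, args2) in q2:
            if p2 != pred or len(args2) != len(args):
                continue
            new, ok = dict(sub), True
            for a, b in zip(args, args2):
                if is_var(a):
                    if new.get(a, b) != b:   # variable already bound elsewhere
                        ok = False
                        break
                    new[a] = b
                elif a != b:                 # constants must match exactly
                    ok = False
                    break
            if ok and extend(rest, new):
                return True
        return False
    return extend(list(q1), {})

def cover(queries):
    """Keep only the most general queries (an inclusion-minimal cover)."""
    kept = []
    for q in queries:
        if any(homomorphism(k, q) for k in kept):
            continue                         # q is more specific than a kept query
        kept = [k for k in kept if not homomorphism(q, k)] + [q]
    return kept
```

For instance, the CQ `[("p", ("a", "b")), ("r", ("a",))]` is more specific than `[("p", ("X", "Y"))]`, so a cover of the two keeps only the latter.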
We assume that rules have disjoint sets of variables, as well as \( Q \) and \( \mathcal{R} \). The rewriting operation involves unifying part of \( Q \) with part of \( \text{head}(R) \). Existential variables in rule heads induce a possibly complex structure, therefore unification has to consider subsets of atoms at once (called “pieces”) instead of single atoms, hence the name piece-unifier. Indeed, if a variable \( x \) from \( Q \) is unified with an existential variable from \( \text{head}(R) \), then all atoms in which \( x \) occurs must be part of the unification, otherwise the obtained rewriting is unsound. Given \( Q' \subseteq Q \), we call separating variables of \( Q' \) the variables occurring in both \( Q' \) and \( (Q \setminus Q') \); the other variables from \( Q' \) are called non-separating. Let \( Q' \) be the unified part of \( Q \); only non-separating variables from \( Q' \) can be unified with an existential variable of the rule. Definition 3 (Piece-Unifier [König et al., 2012]) A piece-unifier of \( Q \) with \( R \) is a triple \( \mu = (Q', H', u) \), where \( Q' \neq \emptyset, Q' \subseteq Q \), \( H' \subseteq \text{head}(R) \) and \( u \) is a substitution of \( T = \text{terms}(Q') \cup \text{terms}(H') \) by \( T \), such that: 1. \( u(H') = u(Q') \); 2. for each existential variable \( x \in \text{vars}(H') \) and \( t \in T \), with \( t \neq x \), if \( u(x) = u(t) \), then \( t \) is a non-separating variable from \( Q' \). Example 1 (Piece-unification) Let \( R = \text{twin}(x, y) \rightarrow \text{motherOf}(z, x) \land \text{motherOf}(z, y) \), where \( z \) is an existential variable, expressing that twins have a common mother. Let \( Q_{no} = \{ \text{motherOf}(v, w), \text{painter}(v) \} \), asking if there is a mother who is a painter. The only candidate atom in \( Q_{no} \) is \( \text{motherOf}(v, w) \). If \( v \) were unified with \( z \), then \( \text{painter}(v) \) would have to be part of the unification as well, which is impossible.
Let \( Q_{yes} = \{ \text{motherOf}(v, w), \text{motherOf}(v, t), \text{female}(w), \text{male}(t) \} \), asking if there is a mother of a female and a male. A piece-unifier of \( Q_{yes} \) with \( R \) is \( \mu = (Q' = \{ \text{motherOf}(v, w), \text{motherOf}(v, t) \}, H' = \{ \text{motherOf}(z, x), \text{motherOf}(z, y) \}, u = \{ z \mapsto v, x \mapsto w, y \mapsto t \}) \) (in the general case \( Q' \) and \( H' \) may be non-isomorphic). The one-step rewriting of \( Q_{yes} \) according to \( \mu \) is \( \{ \text{twin}(w, t), \text{female}(w), \text{male}(t) \} \), as formally defined below. Definition 4 (One-step Rewriting, \( \mathcal{R} \)-rewriting, \( \beta^* \)) Given a piece-unifier \( \mu = (Q', H', u) \) of \( Q \) with \( R \), the one-step rewriting of \( Q \) according to \( \mu \), denoted by \( \beta(Q, R, \mu) \), is the CQ \( u(\text{body}(R)) \cup u(Q \setminus Q') \). An \( \mathcal{R} \)-rewriting of \( Q \) is a CQ \( Q_k \) obtained by a finite sequence \( (Q_0 = Q), Q_1, \ldots, Q_k \) such that for all \( 0 \leq i < k \), there is a rule \( R_i \in \mathcal{R} \) and a piece-unifier \( \mu_i \) of \( Q_i \) with \( R_i \) such that \( Q_{i+1} = \beta(Q_i, R_i, \mu_i) \). We denote by \( \beta^*(Q, \mathcal{R}) \) the set of all \( \mathcal{R} \)-rewritings of \( Q \). It is known that \( \beta^*(Q, \mathcal{R}) \) is a sound and complete set (within the meaning of Def. 2) [Baget et al., 2011]. 3 Embedding a Preorder on Atoms We first explain the ideas underlying the rewriting technique before providing formal definitions. An essential component of any ontology is a hierarchy of concepts and, to a lesser extent, a hierarchy of relations (binary relations are also called roles or properties). In a logical framework, concepts and relations are represented by predicates.
Simple rules of the form \( p(x_1, \ldots, x_k) \rightarrow q(x_1, \ldots, x_k) \), where \( k = 1 \) for a concept, express that \( p \) is more specific than \( q \) (notation \( p \preceq q \)). See, e.g., the logical translation of atomic inclusions in DL, and the \text{subClassOf} and \text{subPropertyOf} assertions in RDFS/OWL. These rules, called hierarchical hereafter, are an obvious cause of combinatorial explosion in query rewriting, as illustrated by the next example. Example 2 Let \( R_1, \ldots, R_n \) be rules of the form \( R_i : b_i(x) \rightarrow b_{i-1}(x) \). Let \( Q = \{ b_0(x_1), \ldots, b_0(x_k) \} \). Each atom \( b_0(x_j) \) in \( Q \) can be rewritten into \( b_1(x_j) \), which in turn can be rewritten into \( b_2(x_j) \), and so on. Thus, there are \((n + 1)^k\) rewritings of \( Q \). Hierarchical rules can be compiled into a (partial) preorder (i.e., a reflexive and transitive relation) on predicates, say \( \preceq \). Then, the homomorphism notion can be extended to take this preorder into account. Let us call \( \preceq \)-homomorphism from a set of atoms \( A_1 \) to a set of atoms \( A_2 \) a substitution \( h \) of \( \text{vars}(A_1) \) by \( \text{terms}(A_2) \) such that for all \( q(e_1, \ldots, e_p) \in A_1 \), there is \( p(h(e_1), \ldots, h(e_p)) \in A_2 \) with \( p \preceq q \). It is easily checked that, given a set of hierarchical rules \( \mathcal{R}_c \) with associated preorder \( \preceq \), it holds that \( F, \mathcal{R}_c \models Q \) if and only if there is a \( \preceq \)-homomorphism from \( Q \) to \( F \). Now, let \( \mathcal{R} = \mathcal{R}_c \cup \mathcal{R}_e \) be a set of existential rules, where \( \mathcal{R}_c \) is composed of the hierarchical rules. Given the preorder \( \preceq \) associated with \( \mathcal{R}_c \), we would like to have \( F, \mathcal{R} \models Q \) if and only if there is an \( \mathcal{R}_e \)-rewriting of \( Q \) that maps to \( F \) by a \( \preceq \)-homomorphism. In Ex.
2, we would have \( b_j \preceq b_i \) for any \( i \leq j \); then, since all rules are hierarchical, we would get a single rewriting, i.e., \( Q \) itself, instead of exponentially many. However, to achieve completeness, we cannot simply rewrite \( Q \) with \( \mathcal{R}_e \) and forget about \( \mathcal{R}_c \): we have to embed the preorder into the rewriting process. This idea can be further extended to compile all rules with an atomic body, as long as they do not introduce existential variables. This allows other common axioms to be compiled, as illustrated by Ex. 3. However, since the atoms in a rule may have predicates of different arity and arguments in different positions, we cannot rely on a simple preorder on predicates anymore. We have to embed a preorder on atoms. Example 3 Let \( \mathcal{R}_{ex} \) be the following set of rules: - \( R_1 = r(x, y) \rightarrow t(x, y) \) (\( r \) is a specialization of \( t \)) - \( R_2 = s(x, y) \rightarrow t(y, x) \) and \( R_3 = t(x, y) \rightarrow s(y, x) \) (\( s \) and \( t \) are inverse relations) - \( R_4 = t(x, y) \rightarrow q(x) \) (\( q \) is the domain of \( t \)) - \( R_5 = t(x, y) \rightarrow q(y) \) (\( q \) is the range of \( t \)) - \( R_6 = p(x, y, z) \rightarrow r(x, z) \) (\( r \) is a projection of \( p \)) - \( R_7 = p(x, x, z) \rightarrow s(x, x) \) (introduction of a “self loop”) A rule is said to be compilable if it has a single body atom, no existential variable and no constant.\(^1\) W.l.o.g. we also assume that a compilable rule has a single head atom. Definition 5 (Inferred Rule, Saturation) Let \( R_1 \) and \( R_2 \) be compilable rules such that \( \text{head}(R_1) \) and \( \text{body}(R_2) \) are unifiable by a (classical) most general unifier \( u \). The rule inferred from \( (R_1, R_2) \) is \( R_1 \bullet R_2 = u(\text{body}(R_1)) \rightarrow u(\text{head}(R_2)) \).
Given a set \( \mathcal{R}_c \) of compilable rules, the saturation of \( \mathcal{R}_c \), denoted by \( \mathcal{R}_c^* \), is the closure of \( \mathcal{R}_c \) by the \( \bullet \) operation. Example 4 The rules inferred from \( \mathcal{R}_{ex} \) (Ex. 3) are the following (we recall that rules have disjoint sets of variables, even if we use the same variable names for simplicity): - \( R_1 \bullet R_3 = r(x, y) \rightarrow s(y, x) \); \( R_1 \bullet R_4 = r(x, y) \rightarrow q(x) \) - \( R_1 \bullet R_5 = r(x, y) \rightarrow q(y) \); \( R_2 \bullet R_3 = s(x, y) \rightarrow s(x, y) \) - \( R_2 \bullet R_4 = s(x, y) \rightarrow q(y) \); \( R_2 \bullet R_5 = s(x, y) \rightarrow q(x) \) - \( R_3 \bullet R_2 = t(x, y) \rightarrow t(x, y) \); \( R_6 \bullet R_1 = p(x, y, z) \rightarrow t(x, z) \) - \( R_7 \bullet R_2 = p(x, x, z) \rightarrow t(x, x) \) - \( R_6 \bullet R_1 \bullet R_3 = p(x, y, z) \rightarrow s(z, x) \) - \( R_6 \bullet R_1 \bullet R_4 = p(x, y, z) \rightarrow q(x) \); \( R_6 \bullet R_1 \bullet R_5 = p(x, y, z) \rightarrow q(z) \) - \( R_7 \bullet R_2 \bullet R_4 = R_7 \bullet R_2 \bullet R_5 = p(x, x, z) \rightarrow q(x) \) The size of \( \mathcal{R}_c^* \) is polynomial in the size of \( \mathcal{R}_c \) when the predicate arity is bounded (more specifically, when the number of occurrences of a variable in an atom is bounded). \(^1\)The condition on constants is to simplify definitions. Definition 6 (Rule Subsumption) Let $R_i$ and $R_j$ be compilable rules. We say that $R_i$ subsumes $R_j$ if there is a homomorphism $h$ from body($R_i$) to body($R_j$) such that $h(\text{head}(R_i)) = \text{head}(R_j)$. Example 3 (cont’d) $R_2 \bullet R_3$ and $R_3 \bullet R_2$ are tautological rules. $R_7 \bullet R_2 \bullet R_4 = p(x, x, z) \rightarrow q(x)$ is subsumed by $R_6 \bullet R_1 \bullet R_4 = p(x, y, z) \rightarrow q(x)$. We first define a preorder $\preceq$ on atoms (and sets of atoms) that will be used to avoid query rewriting with $\mathcal{R}_c$. Definition 7 ($\preceq$) Let $a$ and $b$ be atoms.
We note $a \preceq b$ if (i) $a = b$ or (ii) there is a rule $R \in \mathcal{R}_c^*$ that subsumes the rule $(a \rightarrow b)$ (equivalently: the application of $R$ to $a$ yields exactly $b$). Let $A$ and $B$ be sets of atoms. We note $A \preceq B$ if there is a surjective mapping $f$ from $B$ to $A$ such that for all $b \in B$, $f(b) \preceq b$. In the above definition, note that $a \preceq b$ implies $\text{terms}(b) \subseteq \text{terms}(a)$, and the same holds for $A$ and $B$; moreover, if $a \preceq b$ and $a \neq b$, then the one-step rewriting of $b$ with $R$ may be strictly more general than $a$, as shown in Ex. 5. Example 5 Let $R = r(x, y) \rightarrow q(x)$, $a = r(c_1, c_2)$ and $b = q(c_1)$. The application of $R$ to $a$ yields $b$, hence $a \preceq b$. The one-step rewriting of $b$ with $R$ is not $a$ but $r(c_1, y)$. More generally, considering sets of atoms: Property 1 Let $A$ and $B$ be sets of atoms. It holds that $A \preceq B$ iff there is an $\mathcal{R}_c$-rewriting $B'$ of $B$ that maps to $A$ by a substitution $s$ of $\text{vars}(B') \setminus \text{vars}(B)$ by $\text{terms}(A)$ such that $s(B') = A$. Example 6 Let $\mathcal{R}_c^*$ be as in Ex. 3 and 4. Let $A = \{p(u, u, c_1), r(c_2, c_1)\}$ and $B = \{s(u, u), s(c_1, u), t(c_2, c_1), q(c_2)\}$, where $c_1$ and $c_2$ are constants. One has $A \preceq B$ since $p(u, u, c_1) \rightarrow s(u, u)$ is subsumed by $R_7$, $p(u, u, c_1) \rightarrow s(c_1, u)$ is subsumed by $R_6 \bullet R_1 \bullet R_3$, $r(c_2, c_1) \rightarrow t(c_2, c_1)$ is subsumed by $R_1$, and $r(c_2, c_1) \rightarrow q(c_2)$ is subsumed by $R_1 \bullet R_4$. According to Prop. 1, one can also check that $B' = \{p(u, u, z), p(u, y, c_1), r(c_2, c_1), r(c_2, y)\}$ is an $\mathcal{R}_c$-rewriting of $B$ (by successively using rules $R_7$, $R_3$, $R_1$, $R_6$, $R_1$, $R_4$, $R_1$) and that $B'$ maps to $A$. Thanks to the preorder $\preceq$, we are now able to embed compiled rules in piece-unifiers.
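The saturation of Definition 5 and the induced relation on atoms (Def. 7) can be modeled compactly. The Python sketch below is illustrative only, not the paper's implementation, and handles exactly the compilable case (single-atom bodies and heads, variables only); `canon`, which renames variables canonically so that rules equal up to renaming compare equal, is a helper introduced here:

```python
def mgu(a1, a2):
    """Most general unifier of two atoms whose arguments are all variables
    (the compilable case: no constants, no existential variables).
    Returns a 'find' function resolving each variable, or None."""
    if a1[0] != a2[0] or len(a1[1]) != len(a2[1]):
        return None
    sub = {}
    def find(t):
        while t in sub:
            t = sub[t]
        return t
    for s, t in zip(a1[1], a2[1]):
        rs, rt = find(s), find(t)
        if rs != rt:
            sub[rs] = rt
    return find

def compose(r1, r2):
    """Inferred rule r1 . r2 = u(body(r1)) -> u(head(r2)) (Def. 5),
    or None if head(r1) and body(r2) do not unify."""
    ren = lambda a: (a[0], tuple(t + "'" for t in a[1]))  # rename r2 apart
    body2, head2 = ren(r2[0]), ren(r2[1])
    find = mgu(r1[1], body2)
    if find is None:
        return None
    sub = lambda a: (a[0], tuple(find(t) for t in a[1]))
    return (sub(r1[0]), sub(head2))

def canon(rule):
    """Canonical variable names, so rules equal up to renaming compare equal."""
    names = {}
    ren = lambda a: (a[0], tuple(names.setdefault(t, "v%d" % len(names))
                                 for t in a[1]))
    return (ren(rule[0]), ren(rule[1]))

def saturate(rules):
    """Closure of a set of compilable rules under composition;
    tautological rules (body = head) are dropped."""
    sat = {canon(r) for r in rules}
    changed = True
    while changed:
        changed = False
        for r1 in list(sat):
            for r2 in list(sat):
                r = compose(r1, r2)
                if r is not None:
                    r = canon(r)
                    if r[0] != r[1] and r not in sat:
                        sat.add(r)
                        changed = True
    return sat

def le_atom(a, b, sat):
    """a <= b (Def. 7): a == b, or applying some rule of the saturation
    to the instantiated atom a yields exactly b."""
    if a == b:
        return True
    for body, head in sat:
        if body[0] != a[0] or len(body[1]) != len(a[1]):
            continue
        h = {}
        if all(h.setdefault(v, c) == c for v, c in zip(body[1], a[1])):
            if (head[0], tuple(h[t] for t in head[1])) == b:
                return True
    return False
```

For instance, saturating a specialization rule, a pair of inverse rules and a domain rule (in the spirit of Ex. 3) yields, among others, the inferred rules r(x, y) → q(x) and s(x, y) → q(y).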
Definition 8 ($\preceq$-Piece-Unifier, $\preceq$-rewriting, $\beta_{\preceq}$) Given a preorder $\preceq$ on atoms, a $\preceq$-piece-unifier of $Q$ with $R$ is a triple $\mu = (Q', H', u)$ defined similarly to a piece-unifier (Def. 3), with Condition 1 replaced by $u(H') \preceq u(Q')$. The one-step $\preceq$-rewriting of $Q$ according to $\mu$, denoted by $\beta_{\preceq}(Q, R, \mu)$, is the CQ $u(\text{body}(R)) \cup u(Q \setminus Q')$. Example 7 Consider the rules from Ex. 3, the rule $R = b(x) \rightarrow t(x, y)$ (where $y$ is existential) and the query $Q_1 = \{t(u, v), q(u)\}$. There is no piece-unifier of $Q_1$ with $R$ but there is a $\preceq$-piece-unifier $\mu = (Q_1, \text{head}(R), \{x \mapsto u, y \mapsto v\})$, since $R_4$ subsumes $t(u, v) \rightarrow q(u)$. We obtain $\beta_{\preceq}(Q_1, R, \mu) = \{b(u)\}$. Consider now $Q_2 = \{q(w), s(z, w), c(w)\}$. With the $\preceq$-piece-unifier $\mu' = (\{q(w), s(z, w)\}, \text{head}(R), \{x \mapsto w, y \mapsto z\})$, using $R_4$ and $R_3$ we obtain $\beta_{\preceq}(Q_2, R, \mu') = \{b(w), c(w)\}$. Logical entailment between queries and facts can be computed by a homomorphism check, which we now extend to embed $\preceq$. Definition 9 ($\preceq$-homomorphism) Let $A$ and $B$ be sets of atoms. A $\preceq$-homomorphism from $B$ to $A$ is a substitution $h$ from $\text{vars}(B)$ to $\text{terms}(A)$ such that for all $b \in B$, there is $a \in A$ with $a \preceq h(b)$. Example 8 Consider $Q_1$ and $Q_2$ from Ex. 7. The substitution $h = \{u \mapsto w, v \mapsto z\}$ is a $\preceq$-homomorphism from $Q_1$ to $Q_2$. Indeed, $t(u, v)$ and $q(u)$ are both mapped to $s(z, w)$, by using $R_2$ and $R_2 \bullet R_4$ respectively. Property 2 Let $A$ and $B$ be sets of atoms, and $\mathcal{R}_c$ be a set of compilable rules with associated preorder $\preceq$. There is a $\preceq$-homomorphism from $B$ to $A$ iff $\Phi(A), \mathcal{R}_c \models \Phi(B)$, where $\Phi$ assigns an existentially closed formula to a set of atoms.
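Definition 9 can be turned into a brute-force search procedure. The sketch below is illustrative (not the paper's code) and is parameterized by a callback `le(a, b)` deciding the compiled preorder on instantiated atoms; the capitalized-variable convention is an assumption of this sketch.

```python
from itertools import product

def le_homomorphism(b_atoms, a_atoms, le):
    """Search for a <=-homomorphism from B to A (Def. 9): a substitution h
    of vars(B) by terms(A) such that for every b in B there is some a in A
    with a <= h(b)."""
    b_atoms = list(b_atoms)
    terms_a = list({t for atom in a_atoms for t in atom[1]})
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()
    def solve(i, h):
        if i == len(b_atoms):
            return True
        pred, args = b_atoms[i]
        # deduplicated list of this atom's not-yet-bound variables
        free = [t for t in dict.fromkeys(args) if is_var(t) and t not in h]
        # try every instantiation of these variables by terms of A
        for combo in product(terms_a, repeat=len(free)):
            h2 = dict(h)
            h2.update(zip(free, combo))
            hb = (pred, tuple(h2.get(t, t) for t in args))
            if any(le(a, hb) for a in a_atoms) and solve(i + 1, h2):
                return True
        return False
    return solve(0, {})
```

This exhaustive search is exponential in the worst case, as expected for CQ evaluation; an efficient implementation would instead index the atoms of A by the compiled preorder.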
The next theorem states that the new rewriting operator $\beta_{\preceq}$, associated with $\preceq$-homomorphism, is logically sound and complete. Theorem 1 (Soundness and Completeness of $\beta_{\preceq}$) Let $K = (F, \mathcal{R})$ be a KB, where $\mathcal{R}$ is partitioned into $\mathcal{R}_c$, a set of compilable rules with associated preorder $\preceq$, and $\mathcal{R}_e$. Let $Q$ be a CQ. Then, $F, \mathcal{R} \models Q$ iff there is $Q' \in \beta^*_{\preceq}(Q, \mathcal{R}_e)$ with a $\preceq$-homomorphism from $Q'$ to $F$. Proof: (Sketch) We rely on the soundness and completeness of classical piece-based rewriting and prove that: (1) every $Q' \in \beta^*_{\preceq}(Q, \mathcal{R}_e)$ is covered by $\beta^*(Q, \mathcal{R})$, in the sense that a $\preceq$-homomorphism from $Q'$ to a fact $F$ implies a homomorphism from some element of $\beta^*(Q, \mathcal{R})$ to $F$; and (2) conversely, for each $Q'' \in \beta^*(Q, \mathcal{R})$ there is $Q' \in \beta^*_{\preceq}(Q, \mathcal{R}_e)$ such that a homomorphism from $Q''$ to $F$ implies a $\preceq$-homomorphism from $Q'$ to $F$. □ 4 A Correct $\preceq$-Rewriting Algorithm The compilation step consists in partitioning $\mathcal{R}$ into $\mathcal{R}_c$ and $\mathcal{R}_e$, and compiling $\mathcal{R}_c$ into the preorder $\preceq$. Then, the rewriting step (Algorithm 1) follows the general schema of [König et al., 2012]. Given $\mathcal{R}_e$, $\preceq$ and $Q$, the algorithm starts from the rewriting set $\mathcal{Q}_F = \{Q\}$ and proceeds in a breadth-first manner. At each step, the queries of $\mathcal{Q}_F$ that were generated at the preceding step (i.e., the set $\mathcal{Q}_E$) are explored. Exploring a query consists of computing the set of its one-step rewritings with all rules in $\mathcal{R}_e$ (the set $\mathcal{Q}_t$). At the end of the step, only a cover of $\mathcal{Q}_F \cup \mathcal{Q}_t$ is kept (in case of equivalent queries, priority is given to $\mathcal{Q}_F$ for termination reasons). Algorithm 1: $\preceq$-REWRITING ALGORITHM Data: A set of existential rules $\mathcal{R}_e$, a preorder on atoms $\preceq$ and a conjunctive query $Q$ Result: A cover of the set of $\preceq$-rewritings of $Q$ with $\mathcal{R}_e$ $\mathcal{Q}_F \leftarrow \{Q\}$; // resulting set $\mathcal{Q}_E \leftarrow \{Q\}$; // queries to explore while $\mathcal{Q}_E \neq \emptyset$ do 1. $\mathcal{Q}_t \leftarrow \emptyset$; // queries generated at this rewriting step 2. for $Q_i \in \mathcal{Q}_E$ do 3. for $R \in \mathcal{R}_e$ do 4. for each $\preceq$-piece-unifier $\mu$ of $Q_i$ with $R$ do 5. $\mathcal{Q}_t \leftarrow \mathcal{Q}_t \cup \{\beta_{\preceq}(Q_i, R, \mu)\}$; 6. $\mathcal{Q}_C \leftarrow \text{ComputeCover}(\mathcal{Q}_F \cup \mathcal{Q}_t)$; // update cover 7. $\mathcal{Q}_E \leftarrow \mathcal{Q}_C \setminus \mathcal{Q}_F$; // select unexplored queries 8. $\mathcal{Q}_F \leftarrow \mathcal{Q}_C$; return $\mathcal{Q}_F$ Pairwise comparing all queries at each step may seem expensive since the comparison relies on a $\preceq$-homomorphism check. The point is to ensure the termination of the algorithm whenever a finite rewriting set exists: since a set of rewritings may be infinite and still have a finite cover, a cover has to be maintained at each step (or computed after a finite number of steps). Note, however, that for linear and sticky rules this problem does not occur, and the cover could be computed only once at the end of the algorithm. Theorem 1 is not sufficient to ensure the completeness of the produced set. Indeed, one has to ensure that by pruning more specific queries at each step, the algorithm does not "miss" rewritings. A sufficient property is the so-called prunability of the rewriting operator [König et al., 2013]. Intuitively, this property ensures that for any $Q_2$ more specific than $Q_1$, the following holds: each one-step rewriting of $Q_2$ is either more specific than $Q_1$ itself, or more specific than a one-step rewriting of $Q_1$; hence no rewriting is missed if $Q_2$ is removed from the rewriting set without being explored. This property can be formally expressed as follows for $\beta_{\preceq}$: Property 3 (Prunability) Let $Q_1$ and $Q_2$ be CQs, $R$ be a rule, and $\preceq$ be a preorder on atoms.
If $Q_2$ is more specific than $Q_1$, i.e., $Q_1$ maps to $Q_2$ by a $\preceq$-homomorphism, then for every $\preceq$-piece-unifier $\mu_2$ of $Q_2$ with $R$, either $\beta_{\preceq}(Q_2, R, \mu_2)$ is more specific than $Q_1$, or there is a $\preceq$-piece-unifier $\mu_1$ of $Q_1$ with $R$ such that $\beta_{\preceq}(Q_2, R, \mu_2)$ is more specific than $\beta_{\preceq}(Q_1, R, \mu_1)$. With the previous property and Theorem 1, we can prove the correctness of the algorithm: Theorem 2 Algorithm 1 outputs a (minimal) sound and complete finite rewriting set (with respect to $\preceq$-homomorphism), if such a set exists, and it does not terminate otherwise. Algorithm 1 stops exactly when a UCQ-rewriting exists for the input set of rules and query (a sufficient condition being that the set of rules is fus). 5 Query Evaluation The rewriting set $\mathcal{Q}$ produced by Algorithm 1 can be seen as a "pivotal" representation, in the sense that it can be transformed into different kinds of queries, depending on the type of data storage and the applicative context. Obviously, $\mathcal{Q}$ can be directly evaluated with an adequate implementation of $\preceq$-homomorphism in case the data can be loaded into main memory. Otherwise, the set $\mathcal{Q} \cup \mathcal{R}_c$ can be straightforwardly translated into a Datalog query. A mixed approach can be adopted, with $\mathcal{R}_c$ being used to saturate the data, and $\mathcal{Q}$ being evaluated over the saturated data. One may even assume that all information that could be inferred by compilable rules is already present in the data, and delegate the encoding of this information to the database manager. This notion is called $H$-completeness in [Rodriguez-Muro et al., 2013] in the specific context of DL ABoxes. In particular, if $\mathcal{R}_c$ is composed solely of hierarchical rules and the data are stored in an RDBMS, semantic index techniques make it possible to avoid the effective computation of the saturation [Rodriguez-Muro and Calvanese, 2012].
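As an illustration of the Datalog route just mentioned, the sketch below prints a program that evaluates a pivotal UCQ on top of the compiled rules. It is a hypothetical rendering: the predicate name `ans` and the explicit answer-variable tuples are assumptions of this sketch, not a format defined in the paper.

```python
def to_datalog(pivotal_ucq, compiled_rules, goal="ans"):
    """Emit a Datalog program: one rule per compiled (single-atom) rule,
    plus one 'goal' rule per CQ of the pivotal UCQ.
    Atoms are (predicate, argument-tuple) pairs; each CQ comes with its
    tuple of answer variables."""
    def fmt(atom):
        return "%s(%s)" % (atom[0], ", ".join(atom[1]))
    lines = ["%s :- %s." % (fmt(head), fmt(body))
             for body, head in compiled_rules]
    for cq, ans_vars in pivotal_ucq:
        lines.append("%s(%s) :- %s." % (goal, ", ".join(ans_vars),
                                        ", ".join(fmt(a) for a in cq)))
    return "\n".join(lines)
```

A Datalog engine evaluating such a program returns the certain answers of the OBQA instance through the `ans` predicate.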
When partial saturation of the data is not feasible, $\mathcal{Q}$ may also be unfolded into a set of CQs (i.e., a UCQ) $\mathcal{Q}'$: $\mathcal{Q}'$ is obtained from $\mathcal{Q}$ by adding, for each $Q \in \mathcal{Q}$, all CQs $Q'$ such that $Q' \preceq Q$ (then eliminating redundant queries). Our experiments (see the last section) show that it is more efficient to unfold $\mathcal{Q}$ than to directly compute $\mathcal{Q}'$. More compact forms of positive existential queries can be computed, for instance unions of semi-conjunctive queries (SCQs), which are conjunctions of disjunctions [Thomazo, 2013]: each CQ $Q \in \mathcal{Q}$ is transformed into an SCQ by replacing each atom $a \in Q$ by the disjunction of all atoms $a' \preceq a$. 6 Related Work Since the seminal paper on DL-Lite [Calvanese et al., 2005], a significant amount of work has been carried out on query rewriting algorithms, mainly for DL-Lite, but also for other Horn DLs. The work closest to ours is certainly the tree-witness (tw) rewriting algorithm for DL-Lite [Kikot et al., 2012], because of the similarities between tree witnesses and pieces. Another similarity is that tw-rewriting can make the assumption that the database is already saturated with inferable knowledge that does not involve existential variables (the H-completeness assumption). In the context of DL-Lite, this kind of knowledge corresponds exactly to compilable rules (up to the usual DL restrictions: predicate arity bounded by two, no multiple occurrences of variables in atoms). Hence, one could see our technique as an extension of tw-rewriting with the H-completeness assumption to existential rule KBs. However, the underlying techniques are quite different. Moreover, tw-rewriting heavily exploits the specificities of DL-Lite. First, "DL-Lite rules" without existential variables are necessarily compilable rules. Second, the "anonymous part" of the possibly infinite canonical model of a DL-Lite knowledge base is a set of trees (instead of a hypergraph with any structure for existential rules).
This allows for a smart technique that rewrites the query in a single pass (instead of possibly exponentially many passes with fus rules). Regarding general existential rules, two rewriting methods were proposed [Baget et al., 2009; Gottlob et al., 2011] and respectively implemented in Alaska/PURE [König et al., 2012] and Nyaya [Virgilio et al., 2012]. 7 Experiments Our algorithm was implemented in Java, as an extension of the query rewriting prototype PURE. All tests were performed on a Dell machine with a 3.60 GHz processor and 16 GB of RAM. As benchmarks dedicated to existential rules are not available yet, and in order to compare with other tools producing UCQs, which are mostly restricted to DL-Lite, we considered rule bases obtained by translation of DL-Lite ontologies: first, the widely used benchmark introduced in [Pérez-Urbina et al., 2009] (i.e., ADOLENA (A), STOCKEXCHANGE (S), UNIVERSITY (U) and VICODI (V)); second, very large ontologies built from OpenGalen2 (G) and OBOProtein (O), and used in [Trivela et al., 2013], which respectively contain more than 53k and 34k rules, with 54% and 64% of compilable rules. Each ontology is provided with 5 handcrafted queries. The timeout was set to 10 minutes. Due to space limitations, we report only part of the experiments. We first evaluated the impact of rule compilation on the rewriting process, w.r.t. rewriting sizes and runtimes respectively. We denote by $\text{PURE}_C$ the extension of PURE and call its output the pivotal UCQ. Table 1 shows the size of the UCQ (we recall that it is the same for all systems outputting a minimal, sound and complete UCQ), the size of the pivotal UCQ produced by $\text{PURE}_C$, and the number of queries generated during the rewriting process by PURE and $\text{PURE}_C$. Missing values are due to timeouts.
We observe a huge gap between the sizes of the outputs; the pivotal UCQ is often restricted to a single CQ even when the UCQ has thousands of CQs (which also explains the gap between the numbers of generated queries). Unsurprisingly, the results on the runtimes lead to similar observations (Table 2, Columns 1 and 2). We see that $\text{PURE}_C$ remains faster than PURE even when we include the time required to unfold the pivotal UCQ into a UCQ (Table 2, Column 3), except for $Q_2$ on O, which comes from the fact that the pivotal UCQ is almost as large as the UCQ. Note that we implemented a brute-force unfolding method, which removes redundant CQs only at the end, by a pairwise comparison of queries. Almost all the unfolding time is actually spent in checking redundancies. Nevertheless, the size of the UCQ obtained for some queries (up to more than 30000 CQs on O) clearly advocates for more compact forms of output. We also compared to other query rewriting tools, namely: PURE and Nyaya, which are the only tools processing existential rules, as well as some well-known DL tools: Requiem [Pérez-Urbina et al., 2009] (optimized "full modality"), Iqaros [Imprialou et al., 2012], Rapid [Chortaras et al., 2011] and tw-rewriting (part of the Ontop OBDA system [Rodriguez-Muro et al., 2013]). We emphasize again that these DL tools exploit the specificities of DL-Lite, especially the most recent ones, Rapid and tw-rewriting, whereas the algorithms of Nyaya and PURE are designed for general existential rules. Despite this fact, $\text{PURE}_C$ (without or with unfolding) scales well on large DL-Lite ontologies (except for extreme cases which are difficult for all tools, see ontology O). Globally, $\text{PURE}_C$ behaves similarly to tw-rewriting and Rapid. If we restrict the comparison to classical UCQ output, the fastest tools are undeniably tw-rewriting and Rapid, followed by $\text{PURE}_C$ with unfolding.
The difficulties of Nyaya on A can be explained by the fact that A contains some rules with two atoms in the head, whereas Nyaya only processes rules with a single head atom; hence, it had to take as input the ontology obtained by decomposing these rules into single-head rules, which introduces new predicates, whereas PURE processes rules with any head size. Nyaya could not process the very large ontologies G and O, which also needed to be decomposed. We checked that all systems return exactly the same size of UCQ, hence the choice of a query rewriting tool among those outputting a UCQ is irrelevant for the query evaluation step. We carried out additional experiments to compare the evaluation of the pivotal UCQ over the data saturated by the compilable rules and the evaluation of the corresponding classical UCQ over the initial data (data generated with the modified LUBM generator [Lutz et al., 2013] and stored in a MySQL database). As expected, in a number of cases the system did not accept the classical UCQ because of its size, and in the other cases the pivotal UCQ was evaluated much more efficiently than the classical UCQ. Further work includes extending query rewriting techniques outside the fus fragment, by exploiting Datalog rewritability or combining with other paradigms for rules. Acknowledgments. This work was partially funded by the ANR project PAGODA (ANR-12-JS02-007-01).
Paving the Way for NFV: Simplifying Middlebox Modifications Using StateAlyzr

Junaid Khalid, Aaron Gember-Jacobson, Roney Michael, Anubhavnidhi Abhashkumar, and Aditya Akella, University of Wisconsin—Madison

https://www.usenix.org/conference/nsdi16/technical-sessions/presentation/khalid

## Abstract

Important Network Functions Virtualization (NFV) scenarios such as ensuring middlebox fault tolerance or elasticity require redistribution of internal middlebox state. While many useful frameworks exist today for migrating/cloning internal state, they require modifications to middlebox code to identify needed state. This process is tedious and manual, hindering the adoption of such frameworks. We present a framework-independent system, StateAlyzr, that embodies novel algorithms adapted from program analysis to provably and automatically identify all state that must be migrated/cloned to ensure consistent middlebox output in the face of redistribution. We find that StateAlyzr reduces the man-hours required for code modification by nearly 20x. We apply StateAlyzr to four open source middleboxes and find its algorithms to be highly precise. We find that a large amount of, but not all, live state matters toward packet processing in these middleboxes. StateAlyzr's algorithms can reduce the amount of state that needs redistribution by 600-8000x compared to naive schemes.

## 1 Introduction

Network functions virtualization (NFV) promises to offer networks great flexibility in handling middlebox load spikes and failures by helping spin up new virtual instances and dynamically redistributing traffic among instances. Central to realizing the benefits of such elasticity and fault tolerance is the ability to handle internal middlebox state during traffic redistribution.
Because middlebox state is dynamic (it can be updated for each incoming packet) and critical (its current value determines middlebox actions), the relevant internal state must be made available when traffic is rerouted to a different middlebox instance [16, 26, 30]. Recognizing this, and given the high overhead and poor efficiency of existing approaches for replicating and sharing application state [16, 24, 26], researchers have developed several exciting frameworks for transferring, cloning, or sharing live middlebox state across instances, e.g., OpenNF [16], FTMB [30], Split/Merge [26], Pico Replication [24], and StatelessNF [20].

However, for middleboxes to work with these frameworks, middlebox developers must modify, or at least annotate, their code to perform custom state allocation, track updates to state, and (de)serialize state objects. The central contribution of this paper is a novel, framework-independent system that greatly reduces the effort involved in making such modifications.

Three factors make such modifications difficult today: (i) middlebox software is extremely complex, and the logic to update/create different pieces of state can be intricate; (ii) there may be 10s-100s of object types that correspond to state that needs explicit handling; and (iii) middleboxes are extremely diverse. Factors i and ii make it difficult to reason about the completeness or correctness of manual modifications. And, iii means manual techniques that apply to one middlebox may not extend to another.

Our own experience in modifying middleboxes to work with OpenNF [16] underscores these problems. Making even a simple monitoring appliance (PRADS [6], with 10K LOC) OpenNF-compliant took over 120 man-hours. We had to iterate over multiple code changes and corresponding unit tests to ascertain completeness of our modifications; moreover, the process we used for modifying this middlebox could not be easily adapted to other more complex ones!
These difficulties significantly raise the bar for the adoption of these otherwise immensely useful state handling frameworks. To reduce manual effort and ease adoption, we develop StateAlyzr, a system that relies on data- and control-flow analysis to automate identification of state objects that need explicit handling. Using StateAlyzr's output, developers can easily make framework-compliant changes to arbitrary middleboxes, e.g., identify which state to allocate using custom libraries [20, 24, 26], determine where to track updates to state [16, 26, 30], (de)serialize relevant state objects for transfer/cloning [16], and merge externally provided state with internal structures [16, 24]. In practice we find StateAlyzr to be highly effective. For example, leveraging StateAlyzr to make PRADS OpenNF-compliant took under 6 man-hours of work.

Importantly, transferring/cloning state objects identified with StateAlyzr is provably sound and precise. The former means that the aggregate output of a collection of instances following redistribution is equivalent to the output that would have been produced had redistribution not occurred. The latter means that StateAlyzr identifies minimal state to transfer so as to ensure that redistribution offers good performance and incurs low overhead.

However, achieving high precision without compromising soundness is challenging. Key attributes of middlebox code contribute to this: e.g., numerous data structures and procedures, large callgraphs, heavy use of (multi-level) pointers, and indirect calls to packet processing routines that modify state (see Table 2). To overcome these challenges, StateAlyzr cleverly adapts program analysis techniques, such as slicing [18, 33] and pointer analysis [9, 31], to typical middlebox code structure and design patterns, contributing new algorithms for detailed classification of middlebox state.
These algorithms can automatically identify: (i) variables corresponding to state objects that pertain to individual or groups of flows, (ii) the subset of these that correspond to state objects that can be updated by an arbitrary incoming packet at runtime, (iii) the flow space corresponding to a state object, (iv) middlebox I/O actions that are impacted by each state object, and (v) objects updated at runtime by an incoming packet.

To evaluate StateAlyzr, we both prove that our algorithms are sound (Appendix B) and use experiments to demonstrate precision and the resultant impact on the efficiency of state transfer/cloning. We run StateAlyzr on four open source middleboxes—Passive Real-time Asset Detection System (PRADS) [6], HAProxy load balancer [2], Snort Intrusion Detection System [7], and OpenVPN gateway [5]—and find:

- StateAlyzr's algorithms improve precision significantly: whereas the middleboxes have 1500-18k variables, only 29-131 correspond to state that needs explicit handling, and 10-148 are updateable at run time. By automatically identifying updateable state, StateAlyzr allows developers to focus on the necessary subset of variables among the many present. StateAlyzr can be imprecise: 18% of the updateable variables are mis-labeled (they are in fact read-only), but the information StateAlyzr provides allows developers to ignore processing these variables.

- Using StateAlyzr output, we modified PRADS and Snort to support fault tolerance using OpenNF [16]. We find that StateAlyzr reduces the manual effort needed. We could modify Snort (our most complex middlebox) and PRADS in 90 and 6 man-hours, respectively. Further, by helping track which flowspace an incoming packet belongs to, and which state objects it had updated, StateAlyzr reduces unneeded runtime state transfers between the primary and backup instances of PRADS and Snort by 600× and 8000× respectively compared to naive approaches.
- StateAlyzr can process middlebox code in a reasonable amount of time. Finally, it helped us identify important variables that we missed in our earlier modifications to PRADS, underscoring its usefulness.

## 2 Motivation

A central goal of NFV is to create more scalable and fault tolerant middlebox deployments, where middleboxes automatically scale themselves in accordance with network load and automatically heal themselves when software, hardware, or link failures occur [4]. Scaling, and possibly fault tolerance, requires launching middlebox instances on demand. Both require redistributing network traffic among instances, as shown in Figure 1.

### 2.1 Need for Handling State

Middlebox scaling and failure recovery should be transparent to end-users and applications. Key to ensuring this is maintaining output equivalence: for any input traffic stream, the aggregate output of a dynamic set of middlebox instances should be equivalent to the output produced by a single, monolithic, always-available instance that processes the entire input [26]. The output may include network traffic and middlebox logs.

As shown in prior works [16, 26, 30], achieving output equivalence is hard because middleboxes are stateful. Every packet the middlebox receives may trigger updates to multiple pieces of internal state, and middlebox output is highly dependent on the current state. Thus, malfunctions can occur when traffic is rerouted to a middlebox instance without the relevant internal state being made available at the instance. Approaches like naively rerouting newly arriving flows or forcibly rerouting flows with pertinent state can violate output equivalence. The reader is referred to [16, 24] for a more formal treatment of the need to handle internal state.

### 2.2 Approaches for Handling State

Traditional approaches for replicating and sharing application state are resource intensive and slow [16, 24, 26].
Thus, researchers have introduced fast and efficient frameworks that transfer, clone, or share live internal middlebox state across instances. Examples include OpenNF [16], FTMB [30], Split/Merge [26], Pico Replication [24], and StatelessNF [20].

Making the above modifications to middleboxes is difficult because middlebox code is complex. As shown in Table 2, several popular middleboxes have between 60K and 275K lines of code (LOC), dozens of different structures and classes, and, in some cases, complex event-based control flow. If a developer misses a change to some structure, class, or function, then output equivalence may be violated under certain input patterns, and a middlebox may fail in unexpected ways at run time.

FTMB is the only system that aims to avoid such problems. It automatically modifies middleboxes using LLVM [3]. However, there are two problems: (i) developers must still manually specify which variables may contain/point-to cross-flow state; (ii) the tool is limited to Click-based middleboxes [21].

### 2.3 Simplifying Modification and its Requirements

Making the aforementioned changes to even simple middleboxes can take numerous man-hours, as our own experience with OpenNF suggests. This is a serious barrier to adopting any of the previously mentioned systems. A system that can automatically identify what state a middlebox creates, where the state is created, and how the state is used could be immensely helpful in reducing the man-hours. It can provide developers guidance on writing custom state allocation routines, and on adding appropriate state filtering, serialization, and merging functions. Thus, it would greatly lower the barriers to adopting the above frameworks.

Building such a system is challenging because of soundness and precision requirements. Soundness means that the system must not miss any types, storage locations, allocations, or uses of state required for output equivalence.
A precise system identifies the minimal set of state that requires special handling to ensure state handling at run-time is fast and low-overhead.

### 2.4 Options

Well-known program analysis approaches can be applied to identify middlebox state and its characteristics.

**Dynamic analysis.** We could use dynamic taint analysis [29] to monitor which pieces of state are used and modified while a middlebox processes some sample input. Unfortunately, the sample inputs may not exercise all code paths, causing the analysis to miss some state. We also find that such monitoring can significantly slow middleboxes down (e.g., PRADS [6] and Snort IDS [7] are slowed down > 10×).

**Static analysis.** Alternatively, we could use symbolic execution [10] or data-/control-flow analysis [15, 18]. Symbolic execution can be employed to explore all possible code paths by representing input and runtime state as a series of symbols rather than concrete values. We can then track the state used in each path. While this is sound, the complexity of most middleboxes (Table 2) makes it impossible to explore all execution paths in a tractable amount of time. For example, we symbolically executed PRADS—which has just 10K LOC—for 8 hours using S2E [10], and only 13% of PRADS's code was covered. The complexity worsens exponentially for middleboxes with larger codebases. Recent advances in symbolic execution of middleboxes [14] do not help, as they overcome state space explosion by abstracting away middlebox state, which is precisely what we aim to analyze.

In this paper, we make clever use of data-/control-flow analysis to automatically evaluate how to handle middlebox state.

---

1. Abstract interpretation [12] is another candidate, but it suffers from the well-known problem of incompleteness, i.e., it over-approximates the middlebox's processing and may not identify all relevant state.
Naively applying standard data-/control-flow analysis identifies all variables as pertaining to "state that needs handling" (e.g., variables pertaining to per-packet state, read-only state, and state that falls outside the scope of a flowspace of interest); if developers modify a middlebox to specially handle all these variables, it can result in arbitrarily poor runtime performance during redistribution. We show how middlebox code structure and design patterns can be used to design novel algorithms that employ static program analysis techniques in a way that significantly improves precision without compromising soundness. Our approach is general and does not assume use of any particular state management framework.

## 3 Overview of StateAlyzr

Most middleboxes' code can be logically divided into three basic parts (Figure 2): initialization, packet receive loop, and packet processing. The initialization code runs when the middlebox starts. It reads and parses configuration input, loads supplementary modules or files, and opens log files. All of this can be done in the main() procedure, or in separate procedures called by main. The packet receive loop is responsible for reading a packet (or byte stream) from the kernel (via a socket) and passing it to the packet processing procedure(s). The latter analyzes, and potentially modifies, the packet. These procedures read/write internal middlebox state to inform the processing of the current (and future) packets.

Our approach consists of three primary stages that leverage this structure. In each stage we further refine our characterization of a middlebox's state. The stages and their main challenges are described next:

**1) Identify Per-/Cross-Flow State.** In the first stage, we identify the storage location for all per- and cross-flow state created by the middlebox. The final output of this stage is a list of what we call top-level variables that contain or indirectly refer to such state.
Unlike state that is only used for processing the current packet, per-/cross-flow state influences other packets' processing. Consequently, the lifetime of this state extends beyond the processing of a single packet. We leverage this property, along with knowledge of the relation between variable and value lifetimes, to first identify variables that may contain or refer to per-/cross-flow state. We improve precision by considering which variables are actually used in packet processing code, thereby eliminating variables that contain or refer to state that is only used for middlebox initialization. We call the remaining variables "top-level". The main challenge here is dealing with indirect calls to packet processing in event-based middleboxes (Figure 2), which complicate the task of identifying all packet processing code. We develop an algorithm that adapts forward program slicing [18] to address this challenge (§4.1).

**2) Identify Updateable State.** The second stage further categorizes state based on whether it may be updated while a packet is processed. If state is read-only, we can avoid repeated cloning (in Pico Replication and OpenNF), avoid unnecessary logging of accesses (FTMB), and allow simultaneous access from multiple instances (StatelessNF); all of these will reduce the frameworks' overhead. We can trivially identify updateable state by looking for assignment statements in packet processing procedures. However, this strawman is complicated by the heavy use of pointers in middlebox code, through which state can be updated indirectly. To address this challenge we show how to employ flow-, context-, and field-insensitive pointer analysis [9, 31] (§4.2).

**3) Identify States' Flowspace Dimensions.** Finally, the third stage determines a state's flowspace: the set of packet header fields (e.g., src_ip, dest_ip, src_port, dest_port, and proto) that delineate the subset of traffic that relates to the state.
Flowspace must be considered when modifying a middlebox to use custom allocation functions [24, 26] or filter state in preparation for export [16]. It is important to avoid the inclusion of irrelevant header fields and the exclusion of relevant fields in a state's flowspace, because they impact runtime correctness and performance, respectively. To solve this problem we developed an algorithm that leverages common state access patterns in middleboxes to identify program points where we can apply program chopping [27] to determine relevant header fields (§4.3).

**Soundness.** In order for StateAlyzr to be sound it is necessary for these three stages to be sound. In Appendix B, we prove the soundness of our algorithms.

**Assumptions about middlebox code.** Our proofs are based on the assumption that middleboxes use standard API or system calls to read/write packets, and hash tables or linked lists to store state. These assumptions are not limitations of our analysis algorithms. Instead, they are made to ease the implementation of StateAlyzr. Our implementation can be extended to support additional packet read/write methods or other data structures used to store state.

## 4 StateAlyzr Foundations

We now describe our novel algorithms for detailed state classification. To describe the algorithms, we use the example of a simple middlebox that blocks external hosts creating too many new connections (Figure 3).

### 4.1 Per-/Cross-Flow State

Our analysis begins by identifying the storage location for all relevant per- and cross-flow state created by the middlebox. This has two parts: (i) exhaustively identifying persistent variables to ensure soundness, and (ii) carefully limiting to top-level variables that contain or refer to per-/cross-flow values to ensure precision.
#### 4.1.1 Identifying Persistent Variables

Because per-/cross-flow state necessarily influences two or more packets within/across flows, values corresponding to such state must be created during or prior to the processing of one packet, and be destroyed during or after the processing of a subsequent packet. Hence, the corresponding variables must be persistent, i.e., their values persist beyond a single iteration of the packet processing loop. In Figure 3, the variables declared on lines 7 to 11 are persistent, whereas curr on line 61 is not. Our algorithm first identifies such variables.

**Analysis Algorithm.** We traverse a middlebox's code, as shown in Figure 4. The values of all global and static variables exist for the entire duration of the middlebox's execution, so these variables are always persistent. Variables local to the loop-procedure—i.e., the procedure containing the packet processing loop—exist for the duration of this procedure, and hence the duration of the packet processing loop, so they are also persistent. Local variables of procedures that precede the loop-procedure on the call stack are also persistent, because the procedures' stack frames last longer than the packet processing loop.
```c
 1 struct host {
 2   uint ip;
 3   int count;
 4   struct host *next;
 5 };
 6
 7 pcap_t *intPcap, *extPcap;
 8 int threshold;
 9 char *queue[100]; int head = 0, tail = 0;
10 struct host *hosts = NULL;
11 int main(int argc, char **argv) {
12   pthread_t thread;
13   intPcap = pcap_create(argv[0]);
14   extPcap = pcap_create(argv[1]);
15   threshold = atoi(argv[2]);
16   pthread_create(&thread, NULL, (void *(*)(void *))processPacket, NULL);
17 }
18
19 int loopProcedure() {
20   while (1) {
21     struct pcap_pkthdr pcapHdr;
22     char *pkt = pcap_next(extPcap, &pcapHdr);
23     enqueue(pkt);
24     if (entry->count < threshold)
25       pcap_inject(intPcap, pkt, pcapHdr.caplen);
26   }
27 }
28
29 void enqueue(char *pkt) {
30   head = (head + 1) % 100;
31   queue[head] = pkt;
32 }
33
34 char *dequeue() {
35   int *index = &tail;
36   *index = (*index + 1) % 100;
37   return queue[*index];
38 }
39 void processPacket() {
40   while (1) {
41     ifEmptyWait();
42     char *pkt = dequeue();
43     struct ethhdr *ethHdr = (struct ethhdr *)pkt;
44     struct iphdr *ipHdr = (struct iphdr *)(ethHdr + 1);
45     struct tcphdr *tcpHdr = (struct tcphdr *)(ipHdr + 1);
46     struct host *entry = lookup(ipHdr->saddr, hosts);
47     if (NULL == entry) {
48       struct host *new = malloc(sizeof(struct host));
49       new->ip = ipHdr->saddr;
50       new->next = hosts;
51       hosts = entry = new;
52     }
53     if (tcpHdr->syn && !tcpHdr->ack)
54       entry->count++;
55   } }
56 struct host *lookup(uint ip, struct host *hosts) {
57   struct host *curr = hosts;
58   while (curr != NULL) {
59     if (curr->ip == ip)
60       return curr;
61     curr = curr->next;
62   }
63 }
```

Figure 3: Code for our running example.

However, these variables cannot be used within the packet processing loop, or a procedure called therein, because the variables are out of scope. Thus we exclude these from our list of persistent variables, improving precision.

The above analysis implicitly considers heap-allocated values by considering the values of global, static, and local variables, which can point to values on the heap.
Values on the heap exist until they are explicitly freed (or the middlebox terminates), but their usable lifetime is limited by the lifetimes of the variables that point to them.

*To automatically detect packet processing loops, we use the fact that middleboxes read packets using standard library/system functions.*

Figure 4: Identifying persistent variables

#### 4.1.2 Limiting to Top-level Variables

The above algorithm identifies a superset of variables that may be bound, or point, to per-/cross-flow state. It includes variables bound to state used in initialization for loading/processing configuration/signature files: e.g., variables intPcap and extPcap in Figure 3. Such variables don't need handling during traffic redistribution; they can simply be copied when an instance is launched.

To eliminate such variables and improve precision, the key insight we leverage is that, by definition, per-/cross-flow state must be used within packet processing procedures; we can therefore restrict our attention to persistent variables used in such procedures. However, identifying all such variables is non-trivial, and missing variables impacts analysis soundness.

We considered a strawman approach of using call graphs to identify packet processing procedures. A call graph is constructed by starting at each procedure call within the packet processing loop, and classifying each appearing procedure as a packet processing procedure. However, this analysis does not capture packet processing procedures that are called indirectly. The Squid proxy, e.g., does initial processing of the received packet, then enqueues an event to trigger further processing through later calls to additional procedures. Hence the analysis may incorrectly eliminate some legitimate per-/cross-flow state which is used in such procedures. Thus, we need an approach that exhaustively considers the dependencies between the receipt of a packet and both direct and indirect invocations of packet processing procedures. Below, we show how system dependence graphs [15] and program slicing [18] can be used for this.
A system dependence graph (SDG) consists of multiple program dependence graphs (PDGs)—one for each procedure. Each PDG contains vertices for each statement along with their data and control dependency edges. A data dependence edge is created between statements p and q if there is an execution path between them, and p may update the value of some variable that q reads. A control dependence edge is created if p is a conditional statement, and whether or not q executes depends on p. A snippet of the control and data edges for our example in Figure 3 is shown in Figure 6.

Whereas control edges capture direct invocations of packet processing, we can rely on data edges to capture indirect procedure calls. For example, the dashed yellow lines in Figure 6 fail to capture invocation of the processPacket procedure on the bottom right (because there is no control edge from the while loop or any of its subsequent procedures to processPacket). In contrast, we can follow the data edges (the dashed red line) to track such calls.

Given a middlebox's SDG, we compute a forward program slice from a packet receive function call for the variable which stores the received packet. A forward slice contains the set of statements that are affected by the value of a variable starting from a specific point in the program [18]. Most middleboxes use standard library/system functions to receive packets—e.g., pcap_next or recv—so we can easily identify these calls and the variable pointing to the received packet. We consider any procedure appearing in the computed slice to be a packet processing procedure. For middleboxes which invoke packet receive functions at multiple points, we compute forward slices from every call site and take the union of the procedures appearing in all such slices.

**Values Used in Packet Processing Procedures.** The second half of our algorithm (Figure 5, lines 7–12) focuses on identifying persistent values that are used within some packet processing procedure.
We analyze each statement in the packet processing procedures. If the statement contains a persistent variable, then we mark that persistent variable as a top-level variable.

### 4.2 Updateable State

Next, we delineate updateable top-level variables from read-only variables to further improve precision. In Figure 3, the variables head, tail, hosts, and queue are updateable, whereas threshold is not. Because state is updated through assignment statements, one strawman choice here is to statically identify top-level variables on the left-hand-side (LHS) of assignment statements. In Figure 3, this identifies head, hosts, and queue. However, this falls short due to aliasing, where multiple variables are bound to the same storage location through the use of pointers [11]. Aliasing allows a value reachable from a top-level variable to be updated through the use of a different variable. Thus our strawman can mis-label top-level variables as read-only, compromising soundness. For example, tail is mislabeled in Figure 3, because it never appears on the LHS of assignment statements; yet in dequeue, tail is updated through index, which points to it.

**Analysis Algorithm.** We develop an algorithm to identify updateable top-level variables (Figure 7). Since we are concerned with variables whose (referenced) values are updated during packet processing, we analyze each assignment statement contained in the packet processing procedures identified in the first stage of our analysis (§4.1.2). If the assignment statement's LHS contains a top-level variable, then we mark the variable as updateable (similar to our strawman). Otherwise, we compute the points-to set for the variable on the LHS and compare this with the set of top-level variables and their points-to sets. A variable's points-to set contains all variables whose associated storage locations are reachable from the variable. To compute this set, we employ flow-, context-, and field-insensitive pointer analysis [9].
If the points-to set of the variable on the LHS contains a top-level variable, or has a non-null intersection with the points-to set of a top-level variable, then we mark the top-level variable as updateable.

Due to limitations of pointer analysis, our algorithm may still mark read-only top-level variables as updateable. E.g., field-insensitive pointer analysis can mark a top-level struct variable as updateable even if just one of its subfields is updateable.

### 4.3 State Flowspaces

Finally, we identify the packet header fields that define the flowspace associated with the values of each top-level variable. Identifying too fine-grained of a flowspace for a value—i.e., more header fields than those that actually define the flowspace—is unsound; such an error will cause a middlebox to incorrectly filter out the value when it is requested by a middlebox state management framework [16, 20, 24, 26]. Conversely, assuming an overly permissive flowspace (e.g., the entire flowspace) for a value hurts precision.

To identify flowspaces, we leverage common middlebox design patterns in updating or accessing state. Middleboxes typically use simple data structures (e.g., a hash table or linked list) to organize state of the same type for different network entities (connections, applications, subnets, URLs, etc.). When processing a packet, a middlebox uses header fields\(^4\) to look up the entry in the data structure that contains a reference to the values that should be read/updated for this packet. In the case of a hash table, the middlebox computes an index from the packet header fields to identify the entry pointing to the relevant values. For a linked list, the middlebox iterates over entries in the data structure and compares packet header fields against the values pointed to by the entry.

\(^4\) In cases where keys are not based on the packet header fields, e.g., URLs, a middlebox usually keeps another data structure to maintain the mapping.
\begin{verbatim}
Input: pktProc, percrossflowVars
Output: chop, flowspace
keyedVars = {}
foreach var in percrossflowVars do
    if Type(var) == pointer or Type(var) == struct then
        keyedVars = keyedVars ∪ {var}
foreach proc in pktProc do
    foreach loopStmt in LoopStmts(proc) do
        condVars = {}
        foreach var in Vars(loopStmt.condition) do
            if var in keyedVars or PointsTo(var) ∩ keyedVars ≠ ∅ then
                foreach condStmt in ConditionalStmts(loopStmt.body) do
                    foreach condVar in Vars(condStmt) do
                        if condVar ≠ var then
                            condVars = condVars ∪ {condVar}
        chop = Chop(sdg, pktVar, condVars)
        flowspace = ExtractFlowspace(chop)
\end{verbatim}

Figure 8: Identifying packet header fields that define a per-/cross-flow variable's associated flowspace

**Algorithm.** We leverage the above design patterns in our algorithm shown in Figure 8. In the first step (lines 2-4), if the top-level variable is a struct or a pointer, we mark it as a possible candidate for having a flowspace associated with it. This filters out all the top-level variables which cannot represent more than one entry; e.g., variables head and tail in Figure 3.

We assume that middleboxes use hash tables or linked lists to organize their values,\(^5\) and that these data structures are accessed using square brackets, e.g., `entry = table[index];`, or iteration,\(^6\) e.g., `while (entry->next != null) { entry = entry->next; }` or `for (i = 0; i < list.length; i++) { ... }`.

The second step is thus to identify all statements like these where a top-level variable marked above is on the right-hand-side (RHS) of the statement (square-bracket or pointer-arithmetic scenario) or in the conditional expression (iteration scenario). When square brackets or pointer arithmetic are used, we compute a *chop* between the variables in the access statement and the variable containing the packet returned by the packet receive procedure.
A chop between a set of variables \(U\) at program point \(p\) and a set of variables \(V\) at program point \(q\) is the set of statements that (i) may be affected by the value of variables in \(U\) at point \(p\), and (ii) may affect the values of variables in \(V\) at point \(q\). Thus, the chop we compute above is a snippet of executable code which takes a packet as input and outputs the index or offset required to extract the value from the hashtable. In a similar fashion, when iteration is used, we identify all conditional statements in the body of the loop. We compute a chop between the packet returned by the packet receive procedure and the set of all the variables in the conditional expression which do not point to any of the top-level variables; in our example (Figure 3), the chop starts at line 24 and terminates at line 63. We output the resulting chops, which collectively contain all conditional statements that are required to lookup a value in a linked list data structure based on a flow space definition. Assuming that the middlebox accesses packet fields using standard system-provided structs (e.g., struct ip as defined in netinet/ip.h), we conduct simple string matching on the code snippets to produce a list of packet header fields that define a state's flowspace. ## 5 Enhancements Data and control flow analysis can help improve precision, but they have some limitations in that they cannot guarantee that exactly the relevant state and nothing else has been identified. In particular, static analysis cannot differentiate between multiple memory regions that are allocated through separate invocations of malloc from the same call site. Therefore, we cannot statically determine if only a subset of these memory regions have been updated after processing a set of packets. To overcome potential efficiency loss due to such limitations, we can employ custom algorithms that boost precision in specific settings. We present two candidates below. 
### 5.1 Output-Impacting State

In addition to the three main code blocks (Figure 2), middleboxes may optionally have packet and log output functions. These pass a packet to the kernel for forwarding and record the middlebox's observations and actions in a log file, respectively. These functions are usually called from within the packet processing procedure(s).

In some cases, operators may desire output equivalence only for specific types of output. For example, an operator may want to ensure client connections are not broken when a NAT fails—i.e., packet output should be equivalent—but may not care if the log of NAT'd connections is accurate. In such cases, internal state that only impacts non-essential forms of output does not need special handling during redistribution and can be ignored.

To aid such optimizations, we develop an algorithm to identify the type of output that updateable state affects. We use two key insights. First, middleboxes typically use standard libraries and system calls to produce packet and log output: either PCAP (e.g., pcap_dump) or socket (e.g., send) functions for the former, and regular I/O functions (e.g., write) for the latter. Second, the output produced by these functions can only be impacted by a handful of parameters passed to these functions. Thus, we focus on the call sites of these functions, and their parameters.

**Algorithm.** We use program slicing [18] to identify the dependencies between a specific type of output and updateable variables. We sketch the algorithm and defer details to Appendix A. We first identify the call sites of packet or log output functions by checking each statement in each packet processing procedure (§4.1.2). Then we use the SDG produced in the first stage of our analysis (Figure 5) to compute a backward slice from each call site.
Such a slice contains the set of statements that affect (i) whether the procedure call is executed, and (ii) the value of the variables used in the procedure call, such as the parameters passed to the output function. We examine each statement in a backward slice to determine whether it contains an updateable per-/cross-flow variable. Such variables are marked as impacting packet (or log) output.

### 5.2 Tracking Runtime Updates

Developers aiming to design fault-tolerant middleboxes can use the algorithms in §4 and §5.1 to efficiently clone state to backup instances. For example, if traffic will be distributed among multiple instances in the case of failure, then only state whose flowspace overlaps with that assigned to a specific instance needs to be cloned to that instance. However, the potential performance gains from these optimizations may be limited due to constraints imposed by data/control-flow analysis. For example, our analysis can only identify whether a persistent variable's value may be updated during the middlebox's execution. If we can determine at runtime exactly which values are updated, and when, then we can further improve the efficiency of state cloning and speed up failover.

To achieve higher precision, we must use (simple) run time monitoring. For example, we can track, at run time, whether part of an object is updated during packet processing. To implement this monitoring, we must modify the middlebox to set an "updated bit" whenever a value reachable from a top-level variable is updated during packet processing. Figure 9a shows such modifications, in red, for a simple middlebox.
```
1  struct conn tbl[1000];  // Assigned id 0
2  int count;              // Assigned id 1
3  int tcpcnt;             // Assigned id 2
4  char updated[3];
5  void main() {
6    while (1) {
7      char *pkt = recv();
8      updated[1] = 1;
9      count = count + 1;
10     struct iphdr *i = getIpHdr(pkt);
11     if (i->protocol == TCP) {
12       handleTcp(&tcpcnt, &tbl[hash(pkt)], getTcpHdr(pkt));
13 } } }
14 void handleTcp(int *c, struct conn *s, struct tcphdr *t) {
15   updated[2] = 1;
16   *c = *c + 1;
17   updated[0] = 1;
18   s->flags = s->flags | t->flags;
19   if (t->flags & ACK) {
20     updated[0] = 1;  // Pruned
21     s->acknum = t->acknum;
22 } }
```

(a) Example middlebox code instrumented for update tracking at run time; statements in red are inserted based on our analysis

(b) Annotated control flow graph used for pruning redundant updated-bit-setting (shaded) statements

Figure 9: Implementing update tracking at run time

We create a unique updated bit for each top-level variable—there are three such variables in the example—and we set the appropriate bit before any statement that updates a value that may be reachable from the corresponding variable. We use the same analysis discussed in §4.2 to determine where to insert statements to set updated bits. For any statement where a top-level variable is updated, we insert a statement—just prior to the assignment statement—that sets the appropriate updated bit. However, this approach can add a lot more code than needed: if one assignment statement always executes before another, and they always update the same value, then we only need to set the updated bit before the first assignment statement. For example, line 21 in Figure 9a updates the same compound value as line 18, so the code on line 20 is redundant. We use a straightforward control flow analysis to prune unneeded updated-bit-setting statements. First, we construct a control flow graph (CFG) for each modified packet processing procedure.
Next, we perform a depth-first traversal of each CFG, tracking the set of updated bits that have been set along the path; as we traverse each edge, we label it with the current set of updated bits. Figure 9b shows this annotated CFG for the handleTcp procedure shown in lines 14–22 of Figure 9a. Lastly, for each updated-bit-setting statement in a procedure's CFG, we check whether the bit being set is included in the label for every incoming edge. If this is true, then we prune the statement; e.g., we prune line 20 in Figure 9a.

## 6 Implementation

We implement StateAlyzr using CodeSurfer [1], which has built-in support for constructing CFGs, performing flow- and context-insensitive pointer analysis, constructing PDGs/SDGs, and computing forward/backward slices and chops for C/C++ code. CodeSurfer uses proven sound algorithms to implement these analysis techniques. We use CodeSurfer's Scheme API to access output from these analyses in our algorithms. We applied StateAlyzr to four middleboxes: the PRADS asset monitor [6], the Snort intrusion detection system [7], the HAproxy load balancer [2], and the OpenVPN gateway [5].

**Fault Tolerance.** We use the output from StateAlyzr to add fault tolerance to PRADS and Snort, both off-path middleboxes. We added code to both to export/import internal state (to a standby). We used the output of our first two analysis phases (§4.1 and §4.2) to know which top-level variables' values we need to export, and where in a hot standby we should store them. We used the output of our third analysis phase (§4.3) as the basis for code that looks up per-/cross-flow state values. This code takes a flowspace as input and returns an array of serialized values. We use OpenNF [16] to transfer serialized values to a hot standby. Similarly, import code deserializes values and stores them in the appropriate location. We also implemented both enhancements discussed in §5.

## 7 Evaluation

We report on the outcomes of applying StateAlyzr to four middleboxes.
We address the following questions:

- **Effectiveness**: Does StateAlyzr help with making modifications to today's middleboxes? How many top-level variables do these middleboxes maintain, relative to all variables? What relative fractions of these pertain to state that may need to be handled during redistribution? How precise is StateAlyzr?
- **Runtime efficiency and manual effort**: To what extent do StateAlyzr's mechanisms help improve the runtime efficiency of state redistribution? How much manual effort does it save?
- **Practical considerations**: Does StateAlyzr take prohibitively long to run (like symbolic execution; §2.4)? Is it sound in practice?

### 7.1 Effectiveness

In Table 3, we present a variety of key statistics derived for the four middleboxes using StateAlyzr. We use this to highlight StateAlyzr's ability to improve precision, thereby underscoring its usefulness for developers. The complexity of middlebox code is underscored by the overall number of variables in Table 3, which can vary between 1500 and 18k, and other relevant code complexity metrics shown in Table 2. Thus, manually identifying state that needs handling, and optimizing its transfer, is extremely difficult.

We also note from Table 3 that StateAlyzr identifies 61-507 variables as persistent across the four middleboxes. A subset of these, 29-333, are top-level variables. Finally, 6-148 top-level variables are updateable; operators only have to deal with handling the values pertaining to these variables at run time. Snort is the most complex middlebox we analyze (~275K lines of code) and has the largest number of top-level variables (333); the opposite is true for PRADS (10K LOC and 29 top-level variables). The drastic reduction to the final number of updateable variables shows that naive approaches that attempt to transfer/clone values corresponding to all variables can be very inefficient at runtime. (We show this empirically in §7.2.)
Even so, the number of updateable variables can be as high as 148, and attempting to manually identify them and augment code suitably can be very difficult. By automatically identifying them, StateAlyzr simplifies modifications; we provide further details in §7.2. Finally, the reductions we observe in going from persistent variables to top-level variables (16-53% reduction) and further to updateable ones (19-65% reduction) show that our techniques in §4.1 and §4.2 offer useful improvements in precision.

In Figure 10, we characterize the flowspaces for the variables found in Snort and PRADS. From the left figure, we see that Snort maintains state objects that could be keyed by as many as 5 or 6 header fields; the maximum number of such fields for PRADS is 3. The figure on the right shows the number of variables that use a particular number of header fields as flowspace keys; for instance, in the case of Snort, 3 variables each are keyed on 1 and 6 fields. The total number of variables keyed on at least one key is 2 and 10 for Snort and PRADS, respectively (sum of the heights of the respective bars). These numbers are significantly lower than the updateable variables we discovered for these middleboxes (6 and 148, respectively). Digging deeper into Snort (for example) we find that:

---

**Table 3: Variables and their properties**

<table> <thead> <tr> <th>Mbox</th> <th>All</th> <th>Persistent</th> <th>Top-level</th> <th>Updateable</th> <th>pkt/log</th> <th>serializ.</th> </tr> </thead> <tbody> <tr> <td>PRADS</td> <td>1549</td> <td>61</td> <td>29</td> <td>10</td> <td>N.A. /6</td> <td>14</td> </tr> <tr> <td>Snort</td> <td>1893</td> <td>507</td> <td>333</td> <td>148</td> <td>N.A.
/148</td> <td>176</td> </tr> <tr> <td>HAproxy</td> <td>7876</td> <td>272</td> <td>176</td> <td>111</td> <td>101 / 109</td> <td>59</td> </tr> <tr> <td>OpenVPN</td> <td>8704</td> <td>156</td> <td>131</td> <td>106</td> <td>97 / 102</td> <td>8</td> </tr> </tbody> </table>

**Figure 10: Flowspace dims. of keyed per-/cross-flow vars**

---

248 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI '16) USENIX Association

- 111 updateable variables pertain to all flows (i.e., a wildcard flowspace key). Of these, 59 variables are related to configurations and signatures, while 30 are function pointers (that point to different detection and processing plugins). These 89 variables can be updated from the command line at middlebox run time (when an operator provides new configurations and signatures, or new analysis plugins).

- 27 updateable variables—or 18%—are only used for processing a single packet; hence they don't correspond to per-/cross-flow state. This points to StateAlyzr's imperfect precision. These variables are global in scope and are used by different functions for processing a single incoming packet, which is why our analysis labels them as updateable. A developer can easily identify these variables and can either remove them from the list of updateable variables or modify code to make them local in scope.

### 7.2 Runtime efficiency and manual effort

**7.2.1 Fault Tolerant Middleboxes**

Using fault-tolerant PRADS/Snort versions (§5), we show that StateAlyzr helps significantly cut unneeded state transfers, improving state operation time/overhead.

**Man-hours needed.** Modifying PRADS based on StateAlyzr analysis took roughly 6 man-hours, down from over 120 man-hours when we originally modified PRADS for OpenNF (two different persons made these modifications). Modifying Snort, a much more complex middlebox, took 90 man-hours.
In both cases, most of the time (> 90%) was spent in writing serialization code for the data structures identified by StateAlyzr (14 for PRADS and 176 for Snort; Table 3). Providing support for exporting/importing state objects according to OpenNF APIs took just 1 and 2 hours, respectively.

**Runtime benefits.** We consider a primary/hot-standby setup, where the primary sends a copy of the state to the hot standby after processing each packet. We use a university-to-cloud packet trace [17] with around 700k packets for our trace-based evaluation of this setup. The primary instance processes the trace file until a random point in its first half, and the hot standby takes over after that. We consider three models for operating the hot standby which reflect progressive application of the different optimizations in §4 and §5: (i) the primary instance sends a copy of all the updateable state to the hot standby, (ii) the primary instance only sends the state which applies to the flowspace of the last processed packet, and (iii) in addition to considering the flowspace, we also consider which top-level variables are marked as updated for the last processed packet.

Figure 11a shows the average-case results for the amount of per-packet data transferred between the primary and secondary instances for all three models for PRADS. Transferring state which only applies to the flowspace of the last processed packet, i.e., the second model, reduces the data transferred by 305× compared to transferring all per-/cross-flow state. Furthermore, we find that the third model, i.e., run-time marking of updated state variables, further reduces the amount of data transferred by 2×, on average. This is because not all values are updated for every packet: the values pertaining to a specific connection are updated for every packet of that connection, but the values pertaining to a particular host and its services are only updated when processing certain packets.
This behavior is illustrated in Figure 11b, which shows the size of the state transfer after processing each of the first 200 packets in a randomly selected flow. We measured the increase in per-packet processing time purely due to the code instrumentation needed to identify state updates for highly available PRADS. We observed an average increase of 0.04μsec, which is around 0.14% of the average per-packet processing time for unmodified PRADS.

Figure 12 shows the corresponding results for Snort. Transferring just the updateable state results in an 8800× reduction in the amount of state transferred compared to transferring all per-/cross-flow state. This is because a significant portion of the persistent state in Snort consists of configuration and signatures which are never updated during packet processing. Transferring state which only applies to a particular flowspace further reduces the data transfer by 2.75×. Unlike PRADS, the amount of state transfer in the second model remains constant for a particular flow because most of the state is created on the first few packets of a flow. Finally, runtime marking further reduces the amount of state transferred by 3.6×.

**7.2.2 Packet/Log Output**

Table 3 includes the number of variables that impact packet or log output. For on-path HAproxy (OpenVPN), 87% (91%) of updateable variables affect packet output; a slightly higher fraction impact log output. 95 (93) variables impact both outputs. A much smaller number impacts packet output but not log (6 and 4, respectively). Another handful impact logs but not packets (14 and 9); operators who are interested in just packet output consistency can ignore transferring the state pertaining to these variables, but the benefit will likely not be significant for these middleboxes given the low counts. Being off-path, PRADS and Snort have no variables that impact packet output. For PRADS, 6 out of 10 updateable variables impact log output.
StateAlyzr did find 4 other updateable variables—tos, tstamp, int_pkt, and mtu—but did not mark them as affecting packet output or log output. Upon manual code inspection we found that these values are updated as packets are processed, but they are never used; thus, these variables can be removed from PRADS without any impact on its output, pointing to another benefit of StateAlyzr—code clean-up.

### 7.3 Practicality

Table 4 shows the time and resources required to run our analysis. CodeSurfer computes data and control dependencies and points-to sets at compile time, so the middleboxes take longer than normal to compile. This phase is also memory intensive, as illustrated by peak memory usage. Snort, being complex, takes the longest to compile and analyze (2.80s). This is not a concern since StateAlyzr only needs to be run once, and it runs offline.

**7.3.1 Empirically Verifying Soundness**

Empirically showing soundness in practice is hard. Nevertheless, for the sake of completeness, we use two approaches to verify the soundness of the modifications we make on the basis of StateAlyzr's outputs.

First, we use the experimental harness from §7.2. We compare the logs of PRADS/Snort in the scenario where a single instance processes the complete trace file against the concatenated logs of the primary and hot standby, using the trace and the three models as above. In all cases, there was no difference in the two sets of logs.

Next, we compare with manually making all changes. Recall that we had manually modified PRADS to make it OpenNF-compliant. We compared StateAlyzr's output for PRADS against the variables contained in the state transfer code we added during our prior modifications to PRADS. StateAlyzr found all variables we had considered in our prior modifications, and more. Specifically, we found that our prior modifications had missed an important compound value that contains a few counters along with configuration settings.
## 8 Other Related Work

Aside from the works discussed in §2 and §4 [9, 16, 18, 22, 24, 25, 26, 28, 31, 33], StateAlyzr is related to a few other efforts. Some prior studies have focused on transforming non-distributed applications into distributed applications [19, 32]. However, these works aim to run different parts of an application at different locations. We want all analysis steps performed by a middlebox instance to run at one location, but we want different instances to run on different sets of inputs without changing the collective output from all instances.

Dobrescu and Argyraki use symbolic execution to verify that middlebox code satisfies crash-freedom, bounded-execution, and other safety properties [14]. They employ small, Click-based middleboxes [21] and abstract away accesses to middlebox state. In contrast, our analysis focuses on identifying state needed for correct middlebox operation and works with regular, popular middleboxes. Lorenzo et al. [13] use similar static program analysis techniques to identify flowspace, but their identification is limited to just hashtables.

## 9 Summary

Our goal was to aid middlebox developers by identifying state objects that need explicit handling during redistribution operations. In comparison with today's manual and necessarily error-prone techniques, our program-analysis-based system, StateAlyzr, vastly simplifies this process, and ensures soundness and high precision. Key to StateAlyzr are novel state characterization algorithms that marry standard program analysis tools with middlebox structure and design patterns. StateAlyzr results in nearly 20× reduction in manual effort, and can automatically eliminate nearly 80% of variables in middlebox code from consideration during framework-specific modifications, resulting in dramatic performance and overhead improvements in state reallocation.
Ultimately, we would like to fully automate the process of making middlebox code framework-compliant, thus fulfilling the promise of using NFV effectively for middlebox elasticity and fault tolerance. Our work addresses basic challenges in code analysis, a difficult problem on its own which is necessary to solve first.

Acknowledgments

We thank our shepherd, Mona Attariyan, and the anonymous reviewers for their insightful feedback. This work is supported in part by National Science Foundation (grants

Appendix A. Output-Impacting State - Algorithm

Figure 13 outlines the algorithm for identifying state that impacts packet/log output (from §5.1).

```
Input: sdg, pktProcs, updateableVars
Output: pktoutputVars, logoutputVars
1  pktoutputVars = {}
2  logoutputVars = {}
3  foreach proc in pktProcs do
4    foreach stmt in Statements(proc) do
5      if stmt calls PKT_OUTPUT_FUNC or LOG_OUTPUT_FUNC then
6        slice = BackwardSlice(sdg, stmt, Vars(stmt.RHS))
7        foreach sliceStmt in Statements(slice) do
8          foreach var in Vars(sliceStmt) do
9            if var in updateableVars then
10             if stmt calls PKT_OUTPUT_FUNC then
11               pktoutputVars = pktoutputVars ∪ {var}
12             else
13               logoutputVars = logoutputVars ∪ {var}
```

Figure 13: Identifying output-impacting variables

B. Proofs of soundness

We now prove the soundness of our algorithms.

**Identifying Per-/Cross-Flow State**

Slicing [18] and pointer analysis [9] have already been proven sound.

**Theorem 1.** If a middlebox uses standard packet receive functions, then our analysis identifies all packet processing procedures.

**Proof.** For a procedure to perform packet processing: (i) there must be a packet to process, and (ii) the procedure must have access to the packet, or access to values derived from the packet.
The former is true only after a packet receive function returns. The latter is true only if some variable in a procedure has a data dependency on the received packet. Therefore, a forward slice computed from a packet receive function over the variable containing (a pointer to) the packet will identify all packet processing procedures.

**Theorem 2.** If a value is per-/cross-flow state, then our analysis outputs a top-level variable containing this value, or containing a reference from which the value can be reached (through arbitrarily many dereferences).

**Proof.** Assume no top-level variable is identified for a particular per-/cross-flow value. By definition, a per-/cross-flow value must (i) have a lifetime longer than the lifetime of any packet processing procedure, and (ii) be used within some packet processing procedure. For a value to be used within a packet processing procedure, it must be the value of, or be a value reachable from the value of, a variable that is in scope in that procedure. Only global variables and the procedure's local variables will be in scope. Since we identify statements in packet processing procedures that use global variables, and points-to analysis is sound [9], our analysis must identify a global variable used to access/update the value; this contradicts our assumption. This leaves the case where a local variable is used to access/update the value. When the procedure returns, the variable's value will be destroyed. If the variable's value was the per-/cross-flow value, then the value will be destroyed and cannot have a lifetime beyond the packet processing procedure; this is a contradiction. If the variable's value was a reference through which the per-/cross-flow value could be reached, then this reference will be destroyed when the procedure returns.
Assuming a value's lifetime ends when there are no longer any references to it, the only way for the per-/cross-flow value to have a lifetime beyond any packet processing procedure is for it to be reached through another reference. The only such reference that can exist is through a top-level variable. Since points-to analysis is sound [9], this variable would have been identified, which contradicts our assumption.

**Identifying Updateable State**

**Theorem 3.** If a top-level variable's value, or a value reachable through arbitrarily many dereferences starting from this value, may be updated during the lifetime of some packet processing procedure, then our analysis marks this top-level variable as updateable.

**Proof.** According to the language semantics, scalar and compound values can only be updated via assignment statements. According to Theorem 1, we identify all packet processing procedures. Therefore, identifying all assignment statements in these procedures is sufficient to identify all possible value updates that may occur during the lifetime of some packet processing procedure. The language semantics also state that the variable on the left-hand-side of an assignment is the variable whose value is updated. Thus, when a top-level variable appears on the left-hand-side of an assignment, we know its value, or a reachable value, is updated. Furthermore, flow-insensitive, context-insensitive pointer analysis is provably guaranteed to identify all possible points-to relationships [9]. Therefore, any assignment to a variable that may point to a value also pointed to (indirectly) by a top-level variable is identified, and the top-level variable is marked updateable.
**Identifying Flowspaces**

**Theorem 4.** If a middlebox uses standard patterns for fetching values from data structures, and the flowspace for a top-level variable's value (or a value reachable through arbitrarily many dereferences starting from this value) is not constrained by a particular header field, then our analysis does not include this header field in the flowspace fields for this top-level variable.

**Proof.** A header field can only be part of a value's flowspace definition if there is a data or control dependency between that header field in the current packet and the fetching of an entry from a data structure. It follows from the proven soundness and precision of flow-insensitive, context-insensitive pointer analysis [11] that the SDG will not include false data or control dependency edges. It also follows from the proven soundness of program slicing [18] that only data and control dependencies between source variables (i.e., the packet variable) and target variables (i.e., the index variable, increment variable, or variable in a conditional inside a loop) will be included in the chop.

**Identifying Output-Impacting State**

**Theorem 5.** If a top-level variable's value, or a value reachable through arbitrarily many dereferences starting from this value, may affect a call to a packet output function or the output produced by the function, then our analysis marks this top-level variable as impacting packet output.

**Proof.** Follows from SDG construction soundness [15, 18]. If/when a packet output function is called is determined by a sequence of conditional statements. The path taken at each conditional depends on the values used in the condition. Control and data dependency edges in a system dependence graph capture these features. Since SDG construction is sound [15, 18], we will identify all such dependencies, and thus all values that may affect a call to a packet output function.
Only parameter values, or values reachable through arbitrarily many dereferences starting from these values, can affect the output produced by a packet output function. Thus, knowing what values a parameter value depends on is sufficient to know what values affect the output produced by an output function. Again, since SDG construction is sound, we will identify all such dependencies.
**Managing schema evolution in a container-based persistent system**

J. Baltasar García Perez-Schofield, Emilio García Roselló, Tim B. Cooper, Manuel Pérez Cotó

1 Facultad de Informática, Edif. Politécnico, s/n Campus As Lagesas (Univ. Vigo) 32004 Ourense, España (Spain). {jbgarciarosellompecota}@uvigo.es
2 Director of Smarts Pty Ltd, Level 1, George St., Sydney NSW 2000, Australia. {tco}@smarts.com.au

**Abstract.** Managing schema evolution is a problem every persistent system has to cope with to be useful in practice. Schema evolution basically consists of supporting class modification and dealing with data objects created and stored under the old class definitions. Several proposals have been made to handle this problem in systems that follow a fully orthogonal persistent approach but, until now, there has been no proposal to support it in container-based persistent systems. In this paper we describe a schema evolution management system designed for Barbados. Barbados is a complete programming environment which is based on an architecture of containers to provide persistent storage. Barbados doesn't provide full orthogonal persistence but, as will be described in this paper, its architecture has several other advantages. Among them is that this model is especially suitable for solving the schema evolution problem.

**Keywords** Persistence, OOOS, C++, Swizzling, Schema Evolution.

**Introduction**

A persistent system is a system in which data structures and the data itself persist through executions of any process, as shown in reference 1. Under an object-oriented paradigm, the entities we need to make persistent are classes and objects, as explained in reference 2, and even compiled functions in some systems such as Barbados. A class is probably going to be changed many times over the software life cycle, due to software maintenance.
In a non-persistent system, the class would be changed and the programmer would then have to manually write procedures (for example, converting data from the old format to the new one at load time) in order to adapt all existing stored data to the new class structure. In a persistent system, programmers don't have to worry about data storage, so it is the system itself which has to cope with the problem of objects being out-of-sync with their class definition. To solve this problem, there are three major approaches described in the literature (for example, in reference 3): (a) the system can convert stored objects to their new type, either immediately or in a deferred way; (b) it can support different versions of the software for the different versions of a class; or (c) it can simply prevent programmers from changing any data type, compelling users to define new types when needed. This last possibility is unacceptable most of the time. Related to the first two possibilities, we find two main ways to cope with this problem in persistent systems such as OODBMS, persistent object stores, or persistent programming languages: schema evolution and schema versioning, respectively. Schema versioning, which is explained in depth in reference 4, consists of supporting multiple user-defined version interfaces of an object, allowing other objects to link to a concrete version of a given object. Although versioning is interesting for some domains (e.g. the ones discussed in references 5 and 6), it is not going to be discussed in this paper. When schema evolution is applied, in response to a class definition modification the system has to modify existing class instances to adapt them to the new definition. This process is generally called conversion, as defined in reference 4. This change can be done in an eager (or immediate) way, or in a lazy (or deferred) way (reference 3 offers a detailed explanation of eager and lazy transformations in OODBMS).
The first approach consists of changing all instances of the given class as soon as the class has been changed. The second approach means converting objects only when they are going to be used for the first time. Although schema evolution has been a focus of active research, it continues to be an unsatisfactorily solved problem, as neither of these approaches is broadly suitable. The eager approach has the advantage that, once the conversion is finished, it leaves the persistent store in a consistent state. But this approach normally requires taking the database off-line, possibly for a long time. The lazy approach is clearly better in terms of time and availability, and for that reason it has received a lot of attention in the literature; references 7 and 8 are good examples. A disadvantage of the lazy approach is that instances of newer and older versions of a class can coexist in time, and the system has to check objects every time they are loaded in order to convert them if necessary. In this paper we propose a solution to schema evolution for Barbados, a container-based persistent system. Through a technical discussion, we will show that this kind of persistent system is particularly well suited for solving schema evolution thanks to its container-based structure, explained in reference 9, which, despite being less orthogonal, as studied in reference 10, allows us to combine the advantages of both the eager and the lazy approaches. The remaining sections of this article are structured as follows: first, Barbados and its persistence model are briefly introduced. Next, we present and discuss our design for supporting schema evolution in Barbados. Finally, conclusions are presented. References to related work are scattered through the text, where appropriate.
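The contrast between the two conversion timings can be sketched in a few lines of C++. This is a toy model of our own (the names `eagerEvolve`, `lazyLoad` and the per-instance version tag are assumptions, not taken from any of the systems cited above):

```cpp
#include <cstddef>
#include <vector>

// Each stored instance carries a schema-version tag (an assumption of
// this sketch; real systems may record the class version differently).
struct Instance { int schemaVersion; };
const int CURRENT_VERSION = 2;

void convert(Instance& i) { i.schemaVersion = CURRENT_VERSION; }

// Eager: convert every instance as soon as the class changes, leaving
// the store consistent but requiring it to be off-line meanwhile.
void eagerEvolve(std::vector<Instance>& store) {
    for (std::size_t i = 0; i < store.size(); ++i)
        convert(store[i]);
}

// Lazy: check (and convert) an instance only when it is loaded, so
// instances of old and new versions coexist in the store until then.
Instance& lazyLoad(std::vector<Instance>& store, std::size_t idx) {
    if (store[idx].schemaVersion != CURRENT_VERSION)
        convert(store[idx]);
    return store[idx];
}
```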
**The Barbados persistent system**

Within the framework of the container-based persistent model, which is presented in reference 9, a persistent system called Barbados has been developed as a research prototype. Barbados is an object-oriented persistent programming system, initially presented in reference 10, integrating persistent object storage and a development environment with a C++-based programming language, a compiler and an integrated debugger. Other systems with facilities similar to Barbados are PJama, whose schema evolution capabilities are introduced in reference 11, JSpin, in reference 12, or PerDiS, introduced generally in reference 13. None of these systems offers a working integrated environment. When we focus on the architecture of the system, the most similar system is PerDiS, although it is a middleware, and not a complete environment. JSpin and PJama are Java-based orthogonal persistent programming systems, so both the architecture and the supported language are different. But the main difference between Barbados and other persistent systems is the container-based structure of its persistent storage at the conceptual level. In almost all persistent systems there is some kind of clustering to improve efficiency. If the fine-grained objects which compose data structures were scattered randomly over secondary storage, a persistent system would be unusable, because each fine-grained object access would require a disk access and would therefore be unfeasibly slow\(^1\). Instead, clustering is used to group objects which are likely to be accessed together when saving data on the persistent store, as studied in reference 14. The policy for deciding which objects are grouped together can be based on the class hierarchy, on composition relationships between objects, or even on time of creation. In any case, in orthogonal persistent systems clustering is performed at the physical level, thus having the advantage of being transparent to the user.
However, this policy has performance disadvantages, because transparent, adaptable and dynamic clustering management has proven hard to implement efficiently, as user intentions have to be anticipated (a comparative study of various dynamic clustering techniques can be found in reference 15).

\(^1\) Although page caching and other techniques could help a lot in this situation.

In Barbados, in turn, the clustering mechanism is based on user-managed containers, so it is not completely transparent, although a metaphor of directories has been built over the containers. A detailed explanation of the clustering mechanism in Barbados can be found in references 9 and 16. Programmers therefore do not have a flat storage model, but rather one based on containers of objects. The abstraction offered is very similar to a file system, in which a container behaves like a directory; in the context of Barbados we therefore use both terms as synonyms. Programmers are required to explicitly perform the operations of creating, opening and deleting containers, except of course when these operations are hidden inside other commands, such as the built-in ones for directory management (for example, `cd()`, `mkdir()`, ...). Furthermore, the user decides which container an object should be stored in, mainly by changing the default directory to the desired one. The container is also Barbados' unit of transfer between main memory and the persistent store. A container can be viewed as a Large-Grained Object (LGO), which is a collection of FGO's, or Fine-Grained Objects; an FGO is simply a class instance, i.e. any normal programming-language-level object. FGO's are distributed in non-overlapping containers. A special FGO of each container is called the root, because all the remaining FGO's are reachable from it. The root always consists of an object of the directory metaclass, as stated in references 9 and 16.
This metaclass essentially represents an abstraction of a container, that is, a set of named objects. This is the very same idea as a directory in a file system, which is basically a set of named files, although objects in Barbados are typically much smaller than files. As in file systems, directories in Barbados can be nested: if a directory A contains a named object also of type directory, then that object represents a subdirectory of directory A. This happens, for example, when the user executes the `mkdir()` system function inside any container. The named objects inside a container are the ones which have a public name, which is global and unique when considered as part of the pair `(container_id, object name)`. Objects with a name have an entry in the directory (roughly, each directory is a container): this entry is represented by a `namedobj` object in the container, which points to the true object and holds its name. These named objects are the only ones which can be referenced from another container (through their `namedobj`'s). The rest of the container is not accessible from outside and, by default, named objects are only accessible in read-only mode when referred to from another container. This structure is shown in figure 1, in which the `named_ob` objects are accessible, while the `ob` ones are not.

Figure 1. Structure of a container

One of the purposes of such a structure is to organize the persistent store, preventing the programmer from creating a soup of spaghetti pointers which would create difficulties for an automatic clustering system, as explained in reference 15. This structure also prevents data corruption, a problem that is likely to occur in flat persistent stores: since there are no borders between objects or groups of objects, corruption can be propagated from one group of objects to another, and finally to the whole store.
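The container layout just described can be sketched as a data type. The identifiers below (`Container`, `FGO`, `lookup`) are our own illustrative names, not the actual Barbados implementation:

```cpp
#include <map>
#include <string>
#include <vector>

// A fine-grained object: any ordinary class instance.
struct FGO {
    std::vector<char> bytes;   // placeholder for its state
};

// The large-grained unit of transfer between memory and the store.
struct Container {
    int id;
    std::vector<FGO> objects;            // every FGO, named or not
    // The root directory: namedobj entries, i.e. the only objects
    // visible (read-only by default) from other containers.
    std::map<std::string, FGO*> root;

    // Cross-container references can only reach named objects.
    FGO* lookup(const std::string& name) const {
        auto it = root.find(name);
        return it == root.end() ? nullptr : it->second;
    }
};
```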
In Barbados, data corruption due to invalid pointers can only happen inside the boundaries of a container, thus limiting the potential damage. Encapsulation of containers is studied in references 9 and 17. There is another kind of relation between containers in Barbados: the CN Link, which holds between two containers. These links can be of three kinds (as shown in references 16 and 9), although the most interesting case for schema evolution is the classdef CN Link, which relates an instance in a container to its classdef in another container. This happens when code similar to the following is executed:

```cpp
cd(/); mkdir(test);
Barbados> test : container
cd(test); mkdir(program);
Barbados> program : container
cd(program);                 // Container /test/program
class Counter {
    int count;
public:
    int getCount(void) { return ++count; }
    void reset(void)   { count = 0; }
};
Barbados> class Counter {};
mkdir(data);                 // Container /test/data
Barbados> data : directory
cd(data);
/test/data/Counter c;        // C-N Swizzling relation "data -> program"
                             // between the two containers
Barbados> c : Counter
c.reset();
for (int n = 0; n < 10; ++n) cout << c.getCount() << ' ';
Barbados> 1 2 3 4 5 6 7 8 9 10
```

As a result of all of these instructions, we have two related containers, as presented in figure 2. Technical concepts appearing in this figure, such as `namedobj` and so on, are explained in more depth in references 10 and 16.
As can be seen, the code of the application is created in the program container (container_id = 753), while the data itself (`Counter c`, i.e., the object of class Counter) is created in the data container. The CN-Swizzling (Container-Name Swizzling) table, which is studied in depth in reference 18, of the data container (please refer to figure 2) now stores an entry (e.g. \{container_id = 753, name = 'Counter', TypeClass\}) which is resolved at load time to an ordinary C++ pointer to the class definition object (the \textit{classdef}) in the other container. This entry is needed because the object pointed to by `c` has its class definition in the code container (container 753). The entry in the CN-Swizzling table of the data container (container 68) also stores an \textit{AbbreviatedClassdef} (partially represented in the figure). An \textit{AbbreviatedClassdef} is just a short description of a \textit{classdef}; this way, a container referring to a class in another container has a degree of independence, instead of depending entirely on that other container. The influence of abbreviated class definitions on schema evolution is explained in the next sections.

\textbf{Figure 2. Abbreviated classdef relation with foreign class definitions.}

The C-N Swizzling mechanism is a powerful and useful one. However, its main purpose is to link containers holding the data of an application with containers holding the libraries of functions and classes of the application, rather than to serve as a general communication mechanism. In the latter case, any container could reference any other in the persistent store, and the whole PS would be loaded and unloaded from memory each time Barbados was executed, which is obviously sub-optimal. Also, the introduction of containers might seem contrary to the ultimate goal of persistence, which is to minimise the effort of explicitly moving data between the persistent store and main memory.
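A classdef CN-Link entry of the kind described above might be pictured as the following structure. The field names are ours, chosen to mirror the text, not Barbados' internal definitions:

```cpp
#include <string>
#include <vector>

// "Shallow copy" of a class: the names, types and offsets of its members.
struct AbbreviatedClassdef {
    struct Member {
        std::string name;
        std::string type;
        int offset;
    };
    std::string class_name;
    std::vector<Member> members;
};

// One entry of a container's CN-Swizzling table.
struct CNEntry {
    int container_id;                 // e.g. 753, the container holding the classdef
    std::string name;                 // e.g. "Counter"
    AbbreviatedClassdef abbreviated;  // local snapshot, for a degree of independence
    const void* resolved;             // swizzled to the real classdef pointer at load time
};
```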
We acknowledge that, as a consequence of this design, Barbados doesn't completely comply with the principles of orthogonal persistence, as defined by Atkinson and Morrison in reference 2. But although container-related operations happen at such a coarse-grained level, we think the design is nevertheless consistent with the goals of persistence, for the reasons explained in references 17 and 9. As in other fields (as can be studied in reference 19), this lack of orthogonality can be justified on the basis that it achieves other desirable features. That is the case in Barbados, where better efficiency, robustness (which allows us to support a type-unsafe language such as C++, described in reference 20) and data protection are the aims of the underlying container-based structure. That is also the motivation for the container-based structure of the Grasshopper persistent operating system (described in reference 21). But Barbados differs from Grasshopper in that Barbados addresses issues of fine-grained object management, whereas Grasshopper, being an object-oriented operating system, leaves that up to application developers. Something similar happens with PerDiS, described in reference 13, in which the interface to the PS is a library that must be compiled with the persistent project, therefore giving much lower-level support than Barbados does.

**Schema Evolution in Barbados**

Schema evolution is one of the most important problems still to be solved in the persistence research field. One of the most important references for schema evolution management is O₂, a system described in reference 22, which has also inspired a quite complete schema evolution mechanism in PJama, presented in references 23 and 11.
One of O₂'s strong points is the use of conversion methods, also present in PJama, and in Barbados in the form of conversion functions, which give the user complete freedom to perform conversions of all kinds between two versions of a given class. Other systems do not seem to have achieved the same degree of completeness as O₂. For example, Orion, presented in reference 24, doesn't support conversion functions, while JSpin does not yet have any implementation of schema evolution (though JSpin's authors have presented a theoretical study in reference 25). Many other researchers have studied the effects of schema evolution in their systems, for example Napier, as studied in reference 26, or, more recently, Oberon-D, discussed in depth in reference 27. In this section we describe the schema evolution mechanism that has been added to Barbados. The design of schema evolution support in Barbados has been aimed at providing a "semi-automatic" mechanism, i.e. one as automatic as appropriate. We do not provide sophisticated schema-evolution capabilities, because probably no schema-evolution mechanism will be completely satisfactory or will include all of the transformations that may be required, as discussed in references 3 and 27. Another aim was to maximize the availability of the system. Therefore, our design premises were clearly biased towards a mechanism which would not require bringing the system off-line, as shown in references 7 and 14. Finally, we wanted to take advantage of the container-based structure of Barbados compared to the flat storage of purely orthogonal persistent systems. When considering schema evolution in Barbados we have to take into account that Barbados allows the situation of a container \( A \) containing object instances of a class \( X \), such that \( X \)'s complete class definition can be stored either in the same container \( A \) or in a different container \( B \).
Thus, due to the division of the persistent store presented in reference 10 and extended in reference 9, we have to consider both evolving instances when the container uses a class defined in another container, and a class that changes inside the same container as the data.

**Detecting schema evolution among containers**

As we discussed before, classes and objects being in different containers is going to be a very common situation. In fact, this is the most interesting possibility, and it is the basis of the mixed (eager/lazy) approach we follow, so it is explained in the following sections. It is clear that a pure eager approach is not appropriate in this case, since it would require bringing the entire system off-line during the process. Thus, our choice must be some kind of lazy approach. In this respect, it must be pointed out that a requirement of schema evolution is that the system always knows the type of any piece of data. That is not a problem when using an eager approach, as class modification involves immediate adaptation of all objects of the class to the new definition; therefore, we would only need to store the last definition of each class. But in a lazy approach, as intended in Barbados, we would need to keep all versions of a class that might still have instances in the persistent storage. The concept of a container naturally suggests the following solution to this problem: keep, in each container, a copy of every class of which instances are present in that container. That is the solution adopted in Barbados. But the copy of a class stored in a container is not a copy of the complete class definition; instead, it is what we could call a "shallow copy", that is, the names, types and offsets of each data member of the class. These pieces of information are called \textit{AbbreviatedClassdef}'s in Barbados, which are presented in reference 16, and are stored in the CN table, as previously explained.
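The check that this shallow copy enables can be sketched as a simple comparison. This is our own formulation; the member descriptors below stand in for the information an AbbreviatedClassdef records:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// A member descriptor: what the "shallow copy" records for each field.
struct MemberDesc {
    std::string name;
    std::string type;
    int offset;
};
typedef std::vector<MemberDesc> ClassLayout;

// Compare the container-local snapshot against the current complete
// definition; any mismatch means the evolution process must be fired.
bool needsEvolution(const ClassLayout& snapshot, const ClassLayout& current) {
    if (snapshot.size() != current.size()) return true;
    for (std::size_t i = 0; i < snapshot.size(); ++i) {
        if (snapshot[i].name   != current[i].name ||
            snapshot[i].type   != current[i].type ||
            snapshot[i].offset != current[i].offset)
            return true;
    }
    return false;
}
```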
That replication is automatically and transparently managed by the system. Programmers are not aware of it; they only know that a given container has a certain degree of independence despite holding instances of "foreign" classes. Knowing the type of any piece of data is only the first part of the problem, since at some point we need to determine whether this type is out-of-date, and then take an appropriate action. As we have adopted a lazy approach, this check will only be done when needed. The commonly adopted solution in orthogonal persistent systems is to perform this check when an object is loaded, as their flat structure involves managing schema evolution on a per-object basis. But as Barbados follows a container-based architecture, there is a more attractive option: checking all objects of a container at the point when the container is opened. When a container is loaded, all the related containers (i.e., those holding FGO's needed by the first one) are loaded as well, but in read-only mode by default. That includes the containers which store the complete class definitions of the foreign classes. Then, the \textit{AbbreviatedClassdef} in each \textit{classdef CN Link} is compared with the corresponding complete class definition. If they are different, the evolution process is automatically triggered by the system. <table> <thead> <tr> <th>Value</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>SEConvert</td> <td>Converts all objects in the container to comply with the new class definition. Quiet mode.</td> </tr> <tr> <td>SESplit</td> <td>Creates a copy of the old class in the container holding the instances, separating the two containers and erasing the CN relation.</td> </tr> <tr> <td>SEFail</td> <td>Loading simply fails. The error return value is E_SCHEVOL.</td> </tr> <tr> <td>SEAsk</td> <td>The user is asked which action to take (the answer is again one of these possibilities, excepting ask, of course).</td> </tr> </tbody> </table> Table 1.
Available possibilities for the SEvolution parameter in OpenContainer()

A certain degree of control over the evolution process is available through an argument to the OpenContainer() API function. It actually receives three arguments: the container_id of the container to be loaded, the mode (read-only, read-write) in which it is going to be opened, and a third argument, the one related to schema evolution. The possible values for this argument can be found in table 1. OpenContainer() can be called either interactively or programmatically (for example, through a function which opens a container). The argument used for interactive operations should be SEAsk; for example, the cd() command calls OpenContainer() with SEAsk, so that Barbados asks the user what action to take in case an out-of-sync relation is found. SEConvert, in contrast, applies a default conversion without asking the user any questions, whenever class evolution is needed. In this case, a conversion function is used; it can be a default one, generated by the system, or an available, compatible one, if some transformation has been done previously on that class. This is useful when containers are accessed programmatically instead of interactively. SESplit means that, if evolution is needed, the CN link between the two containers is deleted. The CN link exists, as has been seen, as an entry in the CN-Swizzling table of a container holding objects, pointing to a container holding their class definition (as shown in figure 2). If this option is selected, the system creates a copy of the "foreign" class in the same container the objects are in (the data container, #68, in the figure). This way, the container holding the objects no longer depends on the container with the class definition (the code container, #753).
In order to know how to create the needed class, the information stored in the CN-Swizzling table entry for that class is used, which holds the AbbreviatedClassdef of the class before evolution. SEFail means that Barbados will abort the opening if evolution is needed, and an appropriate error will be returned. If the affected class has not been modified but deleted, then the only possible actions are to 'fail' or to 'split'. In Barbados we have decided to use conversion functions, strongly inspired by the conversion methods which can be found in PJama, which are explained in references 23 and 11, in O₂, commented on in reference 22, and in other OODBMS, compiled in reference 28. Conversion functions are explained in detail below; they are simply an opportunity for the programmer to customize the conversion of the affected instances. Here is an example session with Barbados:

```cpp
cd(/); mkdir(carshop);
Barbados> carshop : directory
cd(carshop); mkdir(data);
Barbados> data : directory
mkdir(bin);
Barbados> bin : directory
cd(bin);
#define MAX 250
class car {
private:
    int year;
    bool emission_filter;
    char plate[MAX];
public:
    int price;
    car(char * plate, int age, bool emission_filter = true);
    bool hasEmissionFilter(void) const { return emission_filter; }
    int getAge(void) { return year; }
    char * getPlate(void) { return plate; }
    void Print(ostream &o);
};
```

In this example, there is a classdef CN Link between the containers /carshop/data and /carshop/bin, due to the fact that a number of instances of class car, which is defined in the bin container, are placed in the data container. The instances of class car are created in this container, pointed to by lcars, for the whole carshop application (although here only a limited view is shown). The mechanism of schema evolution is going to be explained using this tiny example about a second-hand car shop.
Suppose a new law is passed declaring that all cars more than 7 years old without internal emission filters will be prohibited in three years. The owner of the car shop decides to get rid of all cars without emission filters, so the emission_filter field is no longer required. The way to get rid of these cars is to assign them a very high discount (all cars without emission filters must have their price divided by two, and cars of more than 7 years without an emission filter must have their price divided by three). In order to implement these discounts, the carshop system will now have two price fields: actualprice, which is private and depends on the value of the car, and discountprice, which is revised year by year and is subject to revision due to special offers. Finally, both prices must be of type float instead of int, and, as explained, the emission_filter field must be removed. The programmer enters /carshop/bin and recompiles the car class this way:

```cpp
cd(/carshop/bin); edit(car);
class car {
private:
    float actualprice;
    int year;
    char plate[MAX];
public:
    float discountprice;
    car(char * plate, int age, float price);
    int getAge(void) { return year; }
    char * getPlate(void) { return plate; }
    float getActualPrice(void) { return actualprice; }
    void Print(ostream &o);
};
Barbados> class car {};
car::car(char * plate, int age, float price)
{
    strcpy(this->plate, plate);
    year = age;
    actualprice = price;
}
Barbados> car::car : function (ptr to char, int, float) returning void
void car::Print(ostream &o)
{
    o << getPlate() << ", years: " << getAge()
      << ", price: " << (getActualPrice() - discountprice) << endl;
}
Barbados> car::Print : function (reference to ostream) returning void
```

While recompiling the class, Barbados finds that an old definition of the class already exists, so it fires the schema evolution mechanism. The old classdef is preserved, and the container is scanned by the system, looking for instances of class car. As no instances are found in this container, no action is taken.
The conversion process is explained below in detail. Then, the user goes to the `/carshop/data` directory, which stores the list of cars:

```cpp
cd(/carshop/data);
Barbados> Class car doesn't match previous definition: <C>onvert, <S>plit, <F>ail: C
void convertInstance(car * oldcar, car * newcar)
{
    newcar->year = oldcar->year;
    strcpy(newcar->plate, oldcar->plate);
    newcar->discountprice = 0;
    // newcar->actualprice = 0;
    newcar->actualprice = oldcar->price;
    if (!(oldcar->emission_filter))
        if (newcar->year < 1995)
            newcar->discountprice = (oldcar->price * 2) / 3;
        else
            newcar->discountprice = oldcar->price / 2;
}
Barbados> Conversion finished.
```

A default conversion function is presented to the user because, after loading the container, the system finds that the CN relation to the classdef in the bin container doesn't match the abbreviated classdef (remember it had an emission_filter data member). The lines automatically created in this conversion function (shown in inverse video in the actual session) map the compatible data members of the old class to the new one. The user is able to modify this conversion function in any way (for example, the user comments out the fourth line, because it does not serve his purposes, while some other lines are added). This conversion function will be stored for future use in the container holding the class definition, the /carshop/bin container. Once the user completes typing in the conversion function, the container is searched for all instances of the affected class ('car', in this case), and the evolution process finishes. This example shows the main advantage of this approach: containers are converted as they are loaded in memory. Although conversion inside each container is of the eager type, we wait until the container is loaded in memory before converting it. That is why this is a hybrid eager/lazy approach. We admit that the system is off-line for the user for a while, while the container is being converted.
However, a container is a very limited unit compared with the whole PS, so we expect this period to be very short. Conversion functions are explained in detail in a later section.

**Detecting schema evolution inside the container of the modified class**

As previously explained, in Barbados the user is faced with a directory hierarchy; thus, when defining a class, he first decides the directory, and therefore the container, where the class definition will reside. Barbados permits changing a class simply by retyping or editing the class and recompiling it. While compiling, the system detects whether a class definition with the same name already exists in the same directory. If so, Barbados fires the schema evolution process as explained in the next section. Therefore, in contrast to some other systems, for example, the ones described in references 23 and 28, Barbados doesn't offer a special language in order to specify changes in classes. We think that the addition of a new, auxiliary language would be more confusing than this simple solution, which appears more natural. The process would be similar to the one in the previous section, but note that we don't have the "split" option available here, as everything happens inside the same container. Therefore, the basis of the schema evolution mechanism *in this case* is the eager transformation, in contrast to the *semi-lazy* approach in the last section, in which containers are converted as they are loaded in memory. Although a container is not limited in size, except for the available amount of virtual memory, typically it will be on the order of kilobytes or megabytes. The schema evolution mechanism is only applied to the containers loaded in memory, which means that the evolution process can be performed without bringing the whole system off-line; instead, only a limited set of containers is off-line for a limited period of time. This was one of our main design goals.
The developed mechanism seems quite efficient to us and springs directly from the container-based structure of Barbados. In pure orthogonal systems, the flattened structure prevents us from applying eager schema evolution to a bounded portion of the persistent store. Using eager evolution in these systems implies applying it to the whole store, which also implies putting the store off-line, probably for a considerable amount of time.

**Converting instances inside containers**

Once the system has found out that a given container must have all instances of a given class converted, the process to follow is always the same. A preliminary step consists of upgrading the container temporarily to read/write access if it is in read-only access mode. If that is not possible (for example, because another process is using that container), then the schema evolution mechanism fails. Once the conversion starts, a lock is established over the container. This means that no other process can access it until the conversion process is finished. The process is based on the already presented conversion functions and an algorithm which finds all affected instances. Note that the instances of subclasses of affected classes must be converted as well, so an important step carried out before beginning this algorithm consists of defining the affected classes, namely the class detected to have changed, all its subclasses, and all classes containing objects of the affected classes as members. Then, the system tries to locate the first affected instance. If that fails, the conversion process finishes. If an instance is found, the system searches for an appropriate conversion function in the */.conv_functions* directory of the container which holds the class definition (not the one which holds the instances).
The type information stored in the formal arguments of the conversion functions is enough to store all functions related to a given container using overloading, and to find a conversion function suitable for a given class. If a conversion function is not available, an appropriate template for a conversion function is created (as was shown in figure 2) and presented to the user. This preliminary template is built using the default primitive conversions, which are shown in table 2 (rows are the new type, columns the old type). Anyway, after this step, a conversion function is available in order to apply the conversion. <table> <thead> <tr> <th></th> <th>int</th> <th>float</th> <th>double</th> <th>char</th> <th>char *</th> </tr> </thead> <tbody> <tr> <td>int</td> <td></td> <td>assign</td> <td>assign</td> <td></td> <td>atoi()</td> </tr> <tr> <td>float</td> <td>assign</td> <td></td> <td>assign</td> <td></td> <td>atof()</td> </tr> <tr> <td>double</td> <td>assign</td> <td>assign</td> <td></td> <td></td> <td>atod()</td> </tr> <tr> <td>char</td> <td>cast</td> <td>cast</td> <td>cast</td> <td></td> <td></td> </tr> <tr> <td>char *</td> <td>itoa()</td> <td>fcvt()</td> <td>fcvt()</td> <td>assign</td> <td>strcpy()</td> </tr> </tbody> </table> Table 2. Table of automatic primitive conversions. Conversion functions use a special signature for formal arguments, similar to the one used in the evolution mechanism for PJama, which is described in references 23 and 11. There are two formal arguments: an instance of the old class and an instance of the new class. Note that they don't pertain to the same classdef, although they are two versions of the same class. If the class definition is in another container, then a classdef (the "old" one) must be built from the information in the abbreviated classdef entries of the CN swizzling table. This means that a namedobj is also created, exactly as if the class had been created here by the user.
If the class definition is in the same container, then we already have an old classdef and a new classdef, as the recompilation of a classdef doesn't eliminate the old one. In both cases, the conversion algorithm is in charge of deleting the old classdef when it is finished. In order to syntactically distinguish between the "old" and the "new" classdef, a special suffix ("__$old", not expected to be used normally) is added to the name stored in the namedobj representing the old class (a similar technique is used in PJama, as shown in reference 23). If the old class has data members which are pointers to itself, then they must also be modified to point to the old class (i.e., the one with the suffix "__$old" in its name). In this way, the conversion function can refer naturally to it as "classname__$old", without needing to modify the compiler in any way. Conversion functions are automatically marked as friend functions of the classes they are expected to cope with. This allows them to have access to all members of the affected class, without any restriction. Once the instance has been located, it is simply created in its new size, its common fields copied, and finally converted (using the conversion function), without the intervention of the constructors and destructors, which could be dangerous, since in fact the object is not being eliminated, it is just being transformed (though the user is free to call new and delete in the conversion function). The use of constructors and destructors was discarded because, if important resources are freed in the destructor, for example, critical information might be lost. On the other hand, the algorithm wouldn't know what parameters to pass to a constructor of an object which doesn't provide a default constructor, as explained in reference 20.
The table of relocations is used to keep track of all objects that have been converted and had to be moved because their old location was not large enough to hold them. In this table, the sizes of those objects, their old addresses and their new addresses are stored. Once the conversion is finished, the algorithm runs over the container in order to modify all references to the objects listed in the table of relocations. Obviously, the algorithm must take into account that C++ allows references to the old object to fall anywhere between its old address and its old address plus its size (i.e., any of its public members can also be referenced). Finally, this algorithm is safe in terms of reliability: the conversion process will be triggered again if the process is interrupted by an error or a system crash, as the table of CN Swizzling would not have been modified, and the need for conversion would be detected again. At the end of the process, the saving of the container (an atomic operation) is forced in order to ensure that the whole conversion has been applied. This means that the schema evolution process is executed completely or otherwise has no effect at all.

**Advanced conversions**

For more advanced uses, which are not covered by the automatic algorithm, the `InstanceIterator` class is available, which provides users with a safe way to locate all instances in a given container. The user is therefore able to locate all instances in a given container and apply to them any possible change, or even delete some of them. As can be seen in figure 3, the `InstanceIterator` class is provided to the user as a part of the conversion API. This is similar to the bulk conversion possibility of PJama, described in references 23 and 11, although the mechanism here is not so automatic.
The `InstanceIterator` objects provide a safe way to locate all instances of a given class, but users must write their own function using those objects in order to apply any possible conversion they need, perhaps not covered by the automatic evolution process. This meets our objective of providing a minimum basis of automation while still giving the user the possibility of controlling the conversion process.

**Conclusions**

In this article, Barbados, a persistent programming system based on C++, has been briefly presented; it is similar to, but different from, completely orthogonal systems. Its main characteristic is that it is based on containers, dividing the persistent store into "boxes" of objects, where these "boxes" are visible to the user through the abstraction of folders (directories) of a file system. We have concluded that containers are a useful abstraction which justifies the needed relaxation of the orthogonality rules for persistent systems. Also, a design of the schema evolution mechanism for Barbados has been presented. Containers are found to be a successful abstraction for implementing, in a very natural way, a mixed approach between eager and lazy conversion of objects. Objects of modified classes are converted as the containers in which they live are loaded in memory, while the objects in the same container in which the class lives are converted eagerly. This mixed mechanism is convenient, and, to the authors' knowledge, this kind of hybrid approach has not been used previously. Future work on schema evolution in Barbados is to design and implement the migration support mechanism (i.e., support for "moving" objects from one class to another).

**Bibliography**
N-1158-1-ARPA
May 1979

DESIGN OF A RULE-ORIENTED SYSTEM FOR IMPLEMENTING EXPERTISE

D. A. Waterman, R. H. Anderson, Frederick Hayes-Roth, Philip Klahr, Gary Martins, Stanley J. Rosenschein

A Rand Note prepared for the DEFENSE ADVANCED RESEARCH PROJECTS AGENCY

Rand, Santa Monica, CA 90406
APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED

The research described in this report was sponsored by the Defense Advanced Research Projects Agency under Contract No. MDA903-78-C-0029.

The Rand Publications Series: The Report is the principal publication documenting and transmitting Rand's major research findings and final research results. The Rand Note reports other outputs of sponsored research for general distribution. Publications of The Rand Corporation do not necessarily reflect the opinions or policies of the sponsors of Rand research.

PREFACE

This Note describes the preliminary design of a Rule-Oriented System for Implementing Expertise (ROSIE). This system is intended as a tool for model builders seeking to apply expert knowledge to the analysis of problems and to the evaluation of solutions in complex domains, especially domains for which useful analytic models are unavailable. This preliminary design, the result of a six-month design exercise, formed the basis of a proposal for implementation of the software system submitted to the Information Processing Techniques Office of the Defense Advanced Research Projects Agency. The Note is being distributed to promote discussion and exchange of views with colleagues interested in rule-directed systems for heuristic modeling. It is intended for a technical audience; basic knowledge of the architecture of rule-based systems is assumed.
SUMMARY

The preliminary design has been completed of a modeling system that will enable experts and end-users alike to participate directly in the creation of interesting applications systems. ROSIE has been designed to be a flexible system capable of processing large quantities of information efficiently and effectively. In addition, it is able to facilitate interaction with the external world and is implementable within a short time period.

ROSIE is flexible and friendly, i.e., easy to modify, use and understand. This is accomplished by making ROSIE models rule-based and by providing the user with a support package that facilitates his use of the system. The rule syntax of ROSIE is similar to RITA's: IF-THEN rules in an English-like framework. However, rule semantics have been expanded to facilitate iteration through a data set and to provide an abstraction and aggregation hierarchy mechanism. We have also introduced an event-driven monitor capable of testing when expressions become true. This permits the user to notice when things are changing and simplifies implementing alerts or other kinds of change-detecting processes.

To handle the problem of processing large amounts of information, we have modularized the rule and data elements so that individual modules can be accessed and executed independently. This provides a means for maintaining only the currently active, and perhaps relevant, modules in core at any one time. The mechanism for achieving modularity relies on the concepts of partitioning and activation. The user partitions his rules and data into separate sets based on his expectations regarding their interdependencies. Rule and data sets are activated, i.e., permitted to interact to cause rules to fire, only when deemed relevant by the user or the ROSIE monitor.

The support package in ROSIE includes many features for assisting the user, all built around the notion that rules are simply another type of data element that may be accessed and manipulated by rules.
Editing facilities are rule-based and thus may be extended or modified by the user. The user may construct auxiliary rule sets that assist him in determining rule correctness by examining the main rule set, looking for important similarities or differences in rules. A sophisticated explanation facility is included that traces the operation of the system at various levels, providing a way to justify system inferences and debug faulty rule sets. Reasoning in the presence of uncertainty is handled by permitting the user to assign weights or "certainty factors" to rules and data. The user can then specify a certainty range, and only rules and data with certainty factors in that range will be used in the calculation.

CONTENTS

PREFACE
SUMMARY

Section
I. INTRODUCTION
II. SYSTEM DESIGN -- AN OVERVIEW
   Large Rule/Data Sets
   Friendly Support Environment
   Interaction With the External World
   Modifiability
III. DATA SPECIFICATION
   Element Forms
   Element Hierarchies
IV. RULE SPECIFICATION
   Rule Forms
   Instance Sets
   Case Phrase
   Variables
   Datasets/Activation
V. USER SUPPORT ENVIRONMENT
   The User's Top-level View of the System
   Editing Functions
   Model Analysis
REFERENCES

I. INTRODUCTION

This Note describes the preliminary design of a Rule-Oriented System for Implementing Expertise (ROSIE). This system will serve as a tool for model builders seeking to apply expert knowledge to the analysis of problems and the evaluation of solutions in complex domains, especially domains for which useful analytic models are unavailable. Its basic simplicity, together with powerful user-support features, will encourage enterprising users to take the lead in developing innovative models to serve their own mission areas. Computer systems that faithfully incorporate human judgmental expertise offer substantial advantages to the military, especially if they can be built with reasonable effort.
They promise to make such expertise widely sharable, helping to relieve the demand for highly trained and experienced operational personnel. Additionally, a single system may incorporate the expertise of many contributors, resulting in a net improvement in the overall mission performance of the system. In most cases, ROSIE programs are expected to serve as aids to, not replacements for, human decisionmakers, whose performance they will sharpen and stabilize. Potential applications for systems of this kind abound in the civilian and military worlds. Military applications are anticipated in areas such as:

tactical: ops planning, experiments, gaming
personnel: training, testing, practice
mil ops: situation analysis, plan evaluation
logistics: basing, staging
maintenance: cycle planning, policy evaluation

Models built within the ROSIE system may constitute simulations in the application domain. Control in these models is data-directed (Hayes-Roth, Waterman, & Lenat, 1978; Waterman, 1978a, 1978b); that is, actions are specified by sets of rules. For readers not familiar with rule-based systems and prior Rand R&D in this area, the following background information provides some additional context. A rule-based system can be thought of as having three components: a set of rules of the form IF <conditions> THEN <actions>; a data base against which the rule conditions are tested, and which is altered by the execution of rules' actions; and a monitor program that contains logic regarding the order in which rules are to be applied, what to do in case more than one rule applies (i.e., has true conditions) at one time, and so forth. The rule-based system may be situated between the user and other external systems as shown below.
That is, the rule-based system is capable of interacting with the user (e.g., to obtain advice, to explain its behavior upon request) and also communicating with one or more external information systems (which might be contained within the same host computer, or accessed via data networks) to obtain needed information in the course of its calculations. Within this general system architecture, the rule-based system might play several different roles:

- A decision aid or planning aid for the user, containing a number of rules ("heuristics") that guide its deliberations in generating plans or proposed decisions. In this role, the logic within the rules and data of the rule-based system is of paramount interest, with the rule-based system possibly calling upon external information systems for needed data;
- A flexible interface to external information systems. In this case, the user's primary interest is in the external system, but he prefers to interact with that system through a tailored (rule-based) interface capable of mapping user requests into an interactive dialog that extracts needed information from the external system. Here the rule-based system often acts as a "surrogate user", dealing with the external system as if it were a human user of that system; in this manner, no changes are needed in the external system in order to obtain the advantages of this tailored interface.

ROSIE is a system that allows rules + data + monitors to be developed that will become a total "rule-based system" for any of the above uses. Its design is quite heavily influenced by earlier experience with a similar, but much simpler, rule-based system developed by Rand called RITA (Anderson & Gillogly, 1976; Anderson et al., 1977). Readers desiring additional background information on the design philosophy involved in ROSIE's precursors and examples of the uses of rule-based systems are urged to consult the above references.
The ROSIE design exploits and integrates many current ideas in artificial intelligence research. The use of a rule-based language builds on previous work on MYCIN (Shortliffe, 1976) and other work in pattern-directed inference system design (Waterman & Hayes-Roth, 1978b). The event- or change-driven rule-invocation strategies are based on the use of demons in PLANNER and ARS (Hewitt, 1971, 1972; Stallman & Sussman, 1976) and similar schemes for speech understanding (Hayes-Roth & Mostow, 1975). We have borrowed the idea of a hierarchical data structure capable of supporting abstraction and inheritance from the work on units and frames (Minsky, 1975; Martin et al., 1977; Lenat, 1976, 1977; Lenat & Harris, 1978; Winograd, 1975; Bobrow & Winograd, 1977; Charniak, 1975; Havens, 1978). Many of the ideas related to the use of rules as data and the inclusion of elaborate user support features were inspired by INTERLISP (Teitelman, 1974). We intend to retain the positive user-oriented features of these languages while incorporating new features that simplify the handling of rich and complex domain descriptions. Major features include:

- hierarchic structures on data elements and rules, to support abstraction in the models
- user selection of existence-driven or event-driven rule-invocation strategies
- user control of rule iteration
- user control of rule and data activation, including a "rule-subroutining" capability
- user support tools

In the design of the ROSIE system, our primary aim has been to support the creation of realistic models. We know from experience with RITA and other rule-based systems that realistic modeling implies fairly large sets of model elements: rules and data elements. We also know that to be useful to end-users (i.e., people with expertise in some significant problem domain, but lacking expertise in conventional computer programming), a powerful system must be easy to learn and use (Waterman, 1977; Waterman & Jenkins, 1977).
Hence, the main requirements on the ROSIE design are:

- efficient and effective handling of large rule and data sets for realistic modeling
- a "friendly" user environment that facilitates both system building and use
- system flexibility and modifiability to allow exploration of implementation alternatives
- implementation within a relatively short time period to support near-term applications

Our choice of an implementation environment for ROSIE was largely determined by these criteria. Embedding the prototype system in INTERLISP will permit fast implementation and allow a flexible approach to monitor strategies and other key system decisions. Running the ROSIE system on a PDP-10 class computer will give users the speed and memory capacity needed for building large models. The following sections describe in greater detail the features of the ROSIE system. Section II discusses the ROSIE design requirements, relating them to the current design. Sections III and IV describe data and rule specifications and Section V concludes with a discussion of the user support environment.

II. SYSTEM DESIGN -- AN OVERVIEW

LARGE RULE/DATA SETS

Vast amounts of information are needed to reach decisions in complex application areas. To handle this problem we have modularized the rules and elements so individual modules can be accessed and executed independently. This provides a means for maintaining only the currently active and perhaps relevant modules in core at any one time. The mechanism for achieving modularity relies on the concepts of partitioning, activation, and abstraction. The user partitions his rules and data into separate sets based on his expectations regarding their interdependencies. Rule and data sets are activated, i.e., permitted to interact to cause rules to fire, only when deemed relevant by the user or the ROSIE monitor.
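The partition-and-activate scheme just described can be sketched in a few lines of Python. This is a conceptual illustration only; the class and method names are invented for this example, and ROSIE itself is not implemented this way.

```python
# Conceptual sketch (not ROSIE): rules and data are grouped into named
# sets, and only sets that have been activated take part in matching.

class KnowledgeBase:
    def __init__(self):
        self.rule_sets = {}   # set name -> list of rules in that set
        self.active = set()   # names of currently active sets

    def activate(self, name):
        self.active.add(name)

    def deactivate(self, name):
        self.active.discard(name)

    def active_rules(self):
        # Only rules belonging to an active set are eligible to fire;
        # inactive sets need not even be resident in core.
        return [r for name in sorted(self.active)
                  for r in self.rule_sets.get(name, [])]

kb = KnowledgeBase()
kb.rule_sets["threat-rules"] = ["rule-1", "rule-2"]
kb.rule_sets["logistics-rules"] = ["rule-3"]
kb.activate("threat-rules")
print(kb.active_rules())   # only the threat rules are eligible
```

Deactivating a set removes its rules from consideration without deleting them, which is the point of keeping only "currently active and perhaps relevant" modules in play.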
The user is able to handle many different kinds of rules at different levels of generality through abstraction, i.e., organizing the elements so that the "INSTANCE" relations between very general and very specific elements are made explicit. General rules apply to categories of data types called concepts. We have used data abstraction in order to make it easy to write rules that apply to all instances of general concepts. This works by allowing elements that represent low-level or very specific concepts (e.g., a carrier) to inherit attributes specified by higher-level or more general concepts (e.g., naval platform) of which they are instances. Thus, a rule that checks to see if "carriers" have some attribute x will be satisfied if the proper value of the attribute is associated either with "carrier" or with a more general concept of "carrier" such as "naval platform." Similarly, aggregation is used to permit the user to access collections of elements using simple rules. Here the "member of" relation between aggregated elements and their constituent parts is made explicit. Questions about being a member of something are answered by finding the closure of all sets and subsets that are members of the element in question. Finally, we will achieve significant efficiency by allowing the rules to fire in response to events or changes in the data base. This use of an event-driven monitor also simplifies the rules, permitting the user to create rule sets that act as large collections of demons acting independently of one another. In addition, we envisage permitting the user to formulate different control rules, which we call monitor programs, that would be specially adapted to the efficient use of large sets of rules or searches of large data bases.

**FRIENDLY SUPPORT ENVIRONMENT**

A primary goal is to make the system familiar and friendly. By familiar we mean non-radical, extending ideas already developed in other systems.
For example, we have borrowed the idea of an English-like syntax from RITA and MYCIN, the concept of data abstraction hierarchies from a number of AI programs (Minsky, 1975; Lenat, 1977; McCalla, 1978), and the idea of recognition nets to speed up the rule matching from the ACORN work (Hayes-Roth & Mostow, 1975). We understand these ideas quite well because they have been implemented either at Rand or elsewhere several times before. By friendly we mean a system that is easy to use and understand. We accomplish this in two ways—by designing the system around a simple rule syntax and by providing the user with a support package that facilitates his use of the system. The rule syntax of ROSIE is quite similar to RITA: IF-THEN rules in an English-like framework. However, most of the awkwardness of RITA programming is gone. For example, in RITA it is difficult to write rules that look for a certain kind of pattern in the data and then apply a particular action to all instances of the data elements matching that pattern. It is difficult to write single rules that apply to classes of data elements. Also, at present the user often needs to specify the program state as a condition for rule firing and a change of state as an action in order to obtain sequential rule firings or prevent a single rule from firing repeatedly. To avoid these problems we have expanded rule semantics to facilitate iteration through a data set and have provided the abstraction and aggregation hierarchy mechanism. We have also introduced an event-driven monitor to allow expressions to be tested for their becoming true. Thus rules can detect if an expression is currently true but was not true on the last tested cycle and cause appropriate action to be taken. This permits the user to notice when things are changing and simplifies implementing alerts or other kinds of change detecting processes.
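The "becoming true" test just described amounts to edge-triggered condition monitoring: a rule fires when its condition is true on this cycle but was false on the last tested cycle. A minimal Python illustration follows (all names are hypothetical; ROSIE rules are not written in Python):

```python
# Conceptual sketch (not ROSIE): detect when a condition *becomes* true,
# rather than firing continuously while it remains true.

def became_true(condition, state, history, key):
    now = condition(state)
    before = history.get(key, False)
    history[key] = now                 # remember for the next cycle
    return now and not before          # true only on the transition

history = {}
state = {"distance": 40}
close_ship = lambda s: s["distance"] < 30   # e.g., a ship within 30 miles

fired = []
for d in (40, 25, 20, 35, 10):              # successive monitor cycles
    state["distance"] = d
    if became_true(close_ship, state, history, "close-ship"):
        fired.append(d)

print(fired)   # fires on transitions into the condition, not while it holds
```

Here the demon fires at distances 25 and 10 (each a transition from false to true) but stays quiet at 20, where the condition merely continues to hold; this is what lets an alert rule avoid repeating itself.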
The support package includes many features for assisting the user, all built around the notion that rules are simply another type of data element that may be accessed and manipulated by rules. Editing facilities in ROSIE are rule-based and thus may be extended or modified by the user. The user may construct auxiliary rule sets that assist him in determining rule correctness by examining the main rule set, looking for important similarities or differences in rules. A sophisticated explanation facility is included that traces the operation of the system at various levels, providing a way to justify system inferences and debug faulty rule sets. Reasoning in the presence of uncertainty is handled by permitting the user to assign weights or "certainty factors" to rules and data. The user can then specify a certainty range, and only rules and data with certainty factors in that range will be used in the calculation.

INTERACTION WITH THE EXTERNAL WORLD

One of the distinctive strengths of the RITA system, when compared to other existing production-rule programming systems, is the simplicity and power of its facilities for interaction with the external world. The ability of user models to affect the external world, and to be affected by external events, is important for many applications; it is indispensable for command and control decision support models which require the monitoring and interpretation of real-world situations. An especially important application of a real-world interface is the driving of graphic and alphanumeric display systems by the user's model. The experience of many RITA users suggests that RITA's relation to the external world is the appropriate model to pursue for the new system. This interaction is mediated through the exchange of character strings with the host operating system.
The right-hand sides of rules influence the behavior of the host computing system just as would a user sitting at a terminal—by composing commands to the host system, and receiving messages from the host operating system in reply. For example:

```
IF users OF system IS NOT KNOWN
THEN SEND "systat" TO tenex
 AND RECEIVE {ANYTHING FOLLOWED BY sys-prompt} AS users OF system;

WHEN THERE IS AN active-force [f] WHOSE astab IS NOT CURRENT
THEN FOR ALL CASES SEND astab-request OF [f] TO ladder
 AND RECEIVE {ANYTHING FOLLOWED BY end-of-report} AS astab OF [f];
```

In this way, the user's rules can exercise all of the host system's capabilities, including network access (where available) to other systems. Such an approach, which takes advantage of all existing operating system facilities, pays off in two ways: it builds upon the user's knowledge of the host system, and it simplifies the implementation of ROSIE. The exchange of character strings with the host system requires the existence of string analysis and composition mechanisms within ROSIE itself. These will resemble the highly successful RITA "pattern" and "concatenate" features.

MODIFIABILITY

It is important to make the system modifiable by the user to reflect his growing insight and expertise. Thus the monitor programs, the code controlling the way the rules make contact with the data base, are themselves formulated as rule-based programs. Two or three alternative rule-based monitors will be made available, although the sophisticated user will have the option of modifying or rewriting them himself. In this way we make almost all of the facilities of the system accessible to the user. Not only have we carried over the idea from LISP that "data equals program," but we have carried over the good ideas from INTERLISP that the system facilities themselves are written in the same formalism as the applications.
Thus increasing the user's expertise in the applications program provides a capability for simultaneously increasing his expertise in the overall system.

Fig. 1 — Naval Force Configuration

A typical ROSIE application is threat assessment, i.e., representing friendly and enemy forces in a manner that facilitates the recognition of an immediate or potential threat to one side or the other. Figure 1 is a simple illustration of a naval force configuration that is amenable to threat analysis. The problem is to develop a data base representing this configuration and rules describing how to calculate threats. This example will be used throughout the paper to assist in describing the design and use of ROSIE.

III. DATA SPECIFICATION

The basic system components are called knowledge elements. Knowledge elements represent cohesive pieces of knowledge such as events or the status of material objects, and the collection of these elements is called the knowledge base (sometimes referred to as the "data base" in other rule-based systems). Two types of elements exist: concepts and objects. A concept is an element that describes a class of data elements, e.g., plane, ship, battle, war. Classes of events may be represented as concepts, e.g., "losing a battle," while particular events may be represented as objects, e.g., "losing battle 34." Concepts have information associated with them that is representative of the class in general. For example, the concept "plane" might have the following associated information: it flies through the air, is self-propelled, etc. An object, on the other hand, represents a specific "real world" entity. It is usually a particular instantiation of some concept. For example, the data base might contain the concept "carrier" and an object "Enterprise," representing a particular carrier.

ELEMENT FORMS

An element is composed of a name with any number of associated attribute-value pairs.
The name is a string of text that references or "names" the element, while the attributes are characteristics of the element that can have associated values. The name-attribute-value triple can be represented either as statements or graphs. The statement form is illustrated below.

The <attribute1> of the <name> is <value1>.
The <attribute2> of the <name> is <value2>.

These representations can be used to describe the object "Enterprise" from Figure 1 as shown below.

The course of the Enterprise is 315.
The speed of the Enterprise is 20.

A similar example for the concept whose name is "carrier" is shown below.

The armament of a carrier is planes.
The platform-type of a carrier is surface.

We will later show how to link these two representations into a coherent structure called an element hierarchy (see Figure 2). It is also possible to associate values directly with names. The name has an implicit default attribute called "own value" (V). For example:

The enemy force is F-23.
The friendly force is F-37.

Two types of attributes are permitted in ROSIE, user-defined attributes and system attributes. The user-defined attributes are arbitrary words representing relations the user would like to define between the element's name and value. The system attributes are reserved words with special meaning to the ROSIE monitor. The four most important system attributes are "OWN VALUE" (V), "EXISTENCE" (E), "INSTANCE" (I), and "MEMBER" (M). The "OWN VALUE" attribute (V) directly links a value to a name, thus providing a basic binding mechanism analogous to the assignment of values to identifiers in conventional programming languages. This attribute is untyped and can be set by a simple assignment statement, e.g., "SET enemy-force TO F-23" sets the V attribute of enemy-force to F-23. Evaluating an object consists of returning the value of its V attribute.
The V attribute permits the construction of more compact, succinct element descriptions in many cases, e.g., using "the enemy force IS F-23" rather than "the current name OF the enemy force IS F-23." The "EXISTENCE" attribute (E) applies to objects and has a value representing the certainty that the object actually exists. For example, if a blip on a radar screen is interpreted as an enemy plane it may be important to include an estimate of certainty that the plane does exist, as shown below.

```
plane-4
  ↓ E
  .8
```

Here the certainty that plane-4 exists is estimated as .8. If no information about the value of the E attribute is present in the knowledge base, it is assumed to be 1 (completely certain). The "INSTANCE" attribute (I) is used to link concepts to other concepts or objects that are instances of or examples of the original concept. When the I attribute is used to form a name-attribute-value link the name is always a concept, the attribute is I, and the value is either an object or concept name. Examples are shown below.

```
platform       carrier
  ↓ I            ↓ I
carrier        Enterprise
```

The examples state that an instance of a platform is a carrier, and an instance of a carrier is the Enterprise. Since the value of an attribute can be an element name, the I link can be used to build complex nets, as discussed in the next section. The "MEMBER" attribute (M) is used to link elements to other elements that represent components of the original element. The M attribute always links concepts to other concepts, and objects to other objects (see below).

```
task force     force-37
  ↓ M            ↓ M
carrier        Enterprise
```

The examples state that "a member of a task force is a carrier," and "a member of force-37 is the Enterprise." All system attributes have corresponding inverse attributes whose links are automatically defined when the original attribute is defined.
Thus when the user states "an INSTANCE OF a platform IS a ship," or "ship IS an INSTANCE OF a platform" the following links are made between ship and platform:

```
platform       ship
  ↓ I            ↓ -I
ship           platform
```

indicating that the concept "platform" includes the special case "ship."

ELEMENT HIERARCHIES

There are two fundamental kinds of hierarchies that can be constructed in ROSIE, abstraction hierarchies and aggregation hierarchies. The abstraction hierarchy is defined by "INSTANCE" links between high-level concepts and lower-level concepts or objects. The links from higher concepts to lower concepts are traversed via the reserved attribute "INSTANCE" (I), while upward links are traversed via the reserved attribute "IS A" (-I). Hierarchies of this special kind can be created by type declarations as well as by direct manipulation of the reserved attributes. Examples of type declarations of the intended kind are:

- EVERY ship IS a platform
- EVERY OBJECT WHOSE armament IS planes IS a carrier
- platforms INCLUDE surface craft AND aircraft
- surface craft INCLUDE ships AND submarines
- ships INCLUDE carriers, destroyers AND cruisers
- carriers INCLUDE the Enterprise AND the Kittyhawk

The abstraction hierarchy is useful for expressing permanent type-token relationships among the objects in the universe. This hierarchy is a network of elements connected by I links, e.g.,

```
e1 --I--> e2 --I--> e3 --I--> e4
```

Each element in the hierarchy inherits the user-defined attributes and values above it (going against the arrows) in the network. If the same attribute can be accessed more than once during upward traversal through the links, the lowest (closest) attribute-value pair is inherited. In Figure 2, the information that carriers carry planes and are surface vessels does not have to be stored repetitively with the objects "Enterprise" and "Kittyhawk."
Instead, these objects "inherit" these values through upward traversal of the I-links. Thus a rule referring to a ship whose armament is planes would match both "Enterprise" and "Kittyhawk." This I-link inheritance can be suppressed by the addition of a "DON'T INHERIT" flag as an additional property of the attribute. Aggregation hierarchies can be built by connecting elements via the "MEMBER" or M link. They may be accessed by using "MEMBER" in the retrieval clause, e.g., "IF THERE IS a ship THAT IS A MEMBER OF a U.S. task force," or "IF THERE IS A MEMBER OF a U.S. task force WHOSE name IS Enterprise." When M-hierarchies and I-hierarchies intersect, the search for MEMBER will proceed appropriately through the I-hierarchy as well. There is no inheritance of attributes through the M-links. Since the user may himself define new attribute types and link them to elements as desired, he has the potential for creating arbitrarily complex networks in the data base. These networks are created by explicit manipulation of attributes and links, either by editing or by rule-directed manipulation of attributes. The only privileged monitor operation supported for these hierarchies is transitive traversal of these links during matching.

Figure 3. An Example of Abstraction and Aggregation Hierarchies

Objects can only be members of objects and concepts can only be members of concepts, but both objects and concepts can be instances of concepts. Figure 3 illustrates the use of I, M and user-defined attributes in a data hierarchy that partially represents the configuration shown in Figure 1. System-defined attribute types are shown in upper case. Note that "threatening force" is a user-defined attribute name. The user (or system builder) defines his own attribute names and element names; the naming convention is completely arbitrary.
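The inheritance search up the I-links and the transitive closure of the M-links can both be shown with a small Python sketch. The element names echo Figures 2 and 3, but the dictionary encoding is invented for this illustration and is not ROSIE's internal representation; an "isa" field stands in for the upward -I link and "members" for the M link.

```python
# Conceptual sketch (not ROSIE): elements as dicts of attributes, with
# "isa" playing the upward -I link and "members" the M link.

elements = {
    "platform":   {"attrs": {"mobile": True}},
    "ship":       {"attrs": {"medium": "water"}, "isa": "platform"},
    "carrier":    {"attrs": {"armament": "planes"}, "isa": "ship"},
    "Enterprise": {"attrs": {"speed": 20}, "isa": "carrier"},
    "force-37":   {"attrs": {}, "members": ["Enterprise"]},
}

def lookup(name, attr):
    """Search the element itself, then ever more general concepts,
    returning the lowest (closest) attribute-value pair found."""
    while name is not None:
        e = elements[name]
        if attr in e["attrs"]:
            return e["attrs"][attr]
        name = e.get("isa")            # traverse one -I link upward
    return None

def member_closure(name):
    """Transitive closure of the M links below an element."""
    out = set()
    for m in elements[name].get("members", []):
        out.add(m)
        out |= member_closure(m)
    return out

print(lookup("Enterprise", "armament"))   # inherited from "carrier"
print(member_closure("force-37"))
```

Because "Enterprise" stores no armament of its own, the lookup climbs to "carrier" and answers "planes"; a rule matching ships whose armament is planes would therefore catch "Enterprise" without the value being stored repetitively.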
Alternatively, "threatening force" could have been used as an element name. Concepts linked to concepts through member-of relations mean that every instance of the higher level concept contains an instance of the lower level concept, for example,

```
U.S. task force
  ↓ M
carrier
```

means that every U.S. task force contains a carrier, but not that every carrier is a member of a U.S. task force. Links composed of member-instance pairs lead to "possible" or "could be" inferences, as shown below.

```
U.S. task force
  ↓ M
carrier
  ↓ I
nuclear carrier
```

Here we may infer that a nuclear carrier could be a member of a U.S. task force, but not that this is necessarily true. Whether or not we want to incorporate mechanisms to deal with inferences of this sort (and other similar ones) is still an open question. An attribute has other information associated with it besides the value to which it is pointing. It has a data TYPE that can be number, list, string, element-name, or boolean; an INHERITABILITY flag that determines whether or not it will be inherited via the I-links; and a CERTAINTY factor describing how certain it is that the attribute of the element has the given value. There is a universal system concept (an implicit top node) that can have associated with it default attributes, attribute types, and values. All elements then inherit these properties. For example, if the system concept has the attribute LOCATION, with type LIST, then all elements in the system would have it.

IV. RULE SPECIFICATION

Rules are represented as conventional knowledge elements; hence they have attribute/value pairs associated with them and can be accessed and modified by other rules. The feature that distinguishes a rule element from a data element is the presence of a "condition" attribute representing the rule's left-hand side and an "action" attribute representing its right-hand side. Shown below are other built-in attributes rules may have in addition to those the user may care to define.
(Note that this list is not exhaustive.)

Name: name of the rule (must be unique)
Condition: the left-hand side of the rule
Action: the right-hand side of the rule
Certainty: certainty factor associated with the rule
Priority: priority relative to other rules
Creator: name of creator
Date: creation date
Purpose: explanation of rule purpose
Comments: additional commentary regarding the rule

Since rules are simply another form of knowledge element, they are amenable to internal analysis by other rules and to inclusion in hierarchies within the system. There is no formal distinction between rules that manipulate rule elements and rules that manipulate data elements. This mechanism will be convenient and useful for selecting from a very large set of rules and objects those that constitute interesting subsets for review, correctness checking, or activation. The monitor that controls rule matching, selection and execution can be chosen by the user from a menu of available monitors. If none fits his specifications and he is an experienced programmer he will be able to modify existing monitors or write his own in ROSIE. The default monitor that is available is the ordered monitor. This monitor assumes that priorities have been assigned to each rule; these priorities are often based on the order in which the rules are entered into the system. A cycle in this system consists of selecting a rule that matches the data and executing it. The highest priority rule that currently matches is the one selected. Once the rule actions are executed (creating the possibility of new rules that match) the cycle starts again. This continues until no rules match or the action STOP is executed.

**RULE FORMS**

Rules fall into three categories: WHEN-THEN, IF-THEN, and DO. The WHEN-THEN rule cannot fire more than once for each distinct (set of) knowledge element(s) that matches its conditions or left-hand side (LHS).
The only way it can fire again on the same element is when the matching value of the element has been changed. This is an example of an event-driven or demon-like rule. This rule has the form shown below.

WHEN <conditions> THEN <actions> {ELSE <actions>}

example:

WHEN THERE IS a ship WHOSE affiliation IS NOT KNOWN
 AND the DISTANCE BETWEEN the ship AND the U.S. task force IS LESS THAN 30 miles
THEN ADD the name OF the ship TO the potential threat list
 AND SEND the name OF the ship TO the USER

In the above example all ships with unknown affiliation and close proximity to the U.S. task force are added to the potential threat list. Each time the rule fires, one new ship is added, i.e., the rule must fire n times to add n ships to the list. After a ship's name is added to the list it is sent to the user. Because the rule fires only on knowledge base changes and only once for each data element no special mechanism is needed to keep the rule from being invoked continuously for the same knowledge elements. The IF-THEN rule is analogous to the standard RITA rule. It is existence-driven; it fires repeatedly as long as the conditions are true, even if the elements matching its conditions have not been changed. The actions are executed once during each monitor cycle; repeated rule firings require repeated cycles. The form of this rule is shown below.

IF <conditions> THEN <actions> {ELSE <actions>}

example:

IF the state OF the system IS "compute relative threat"
THEN SET the state OF the system TO "set threat level"
 AND SET the relative threat OF the system TO
     (100 * attack density OF the U.S. task force) / engagement density OF the U.S. task force

After the THEN actions have been performed and the conditions are no longer true, rule firing is terminated. The DO rule is analogous to a RITA "immediate action." When used as part of a rule set, it behaves like a rule that is always true, e.g., "IF TRUE THEN <actions>," and executes its actions each time it is tested by the monitor.
When used alone it has the effect of a command and is executed as soon as it is read by the monitor. It has the form shown below.

DO <actions>

example:

DO ACTIVATE RULESET rs23 AND ACTIVATE DATASET d15

This type of rule permits the user to effectively insert commands into his rule sets. This capability was found to be quite useful in the RITA system. Rule actions can have the following forms. (Note that this list is not exhaustive.)

assignment: SET <attribute> OF <name> TO <value>
list: PUT <value> INTO <attribute> OF <name>
creation: CREATE <item> (creates elements or attributes)
deletion: DELETE <name> ; DELETE <attribute> OF <name>
I/O: SEND, RECEIVE, OPEN, CLOSE, READFILE
termination: STOP ; RETURN
rule: WHEN-THEN, IF-THEN, or DO
activation: ACTIVATE <rulesets> | DEACTIVATE <rulesets>
subroutine: CALL <rulesets>

The assignment, list, creation, deletion and I/O actions all correspond to useful actions available in RITA. The use of a rule as an action allows the user to create conditional expressions within the right-hand side (action side) of a rule. Experience with RITA has shown that this capability can significantly reduce the number of rules needed to express certain types of repetitive procedures. The activation and subroutine capabilities facilitate organizing the program in a modular form that is more efficient and easier to debug. More will be said about these capabilities in the next section. Rule conditions have the form of a boolean expression with parentheses for disambiguation, e.g., A & (B v C) & ¬D. The two basic forms of the expression are:

<attribute> OF <name> IS <value>
THERE IS <name> WHOSE <attribute> IS <value>

Again this list is not exhaustive, as there are many relations other than equality (e.g., greater than, less than, contains, between, etc.) needed to provide the user with a workable set of tools for rule construction.
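The ordered monitor's recognize-act cycle over rules of these forms can be sketched as a short loop in Python. This is a conceptual illustration only: the priority scheme and the STOP convention mirror the text, but every name in the sketch is invented, and real ROSIE rules are English-like, not Python tuples.

```python
# Conceptual sketch (not ROSIE): the "ordered monitor" cycle. Each rule is
# (priority, condition, action). On each cycle the highest-priority rule
# whose condition matches is fired, until no rule matches or STOP occurs.

def run(rules, kb, max_cycles=100):
    for _ in range(max_cycles):
        matching = [r for r in rules if r[1](kb)]
        if not matching:
            return kb                   # quiescence: no rule matches
        prio, cond, action = max(matching, key=lambda r: r[0])
        if action(kb) == "STOP":        # an action may terminate the run
            return kb
    return kb

def set_state(value):
    def action(kb):
        kb["state"] = value
    return action

rules = [
    (3, lambda kb: kb["state"] == "start",   set_state("compute")),
    (2, lambda kb: kb["state"] == "compute", set_state("done")),
    (1, lambda kb: kb["state"] == "done",    lambda kb: "STOP"),
]

kb = run(rules, {"state": "start"})
print(kb["state"])   # "done"
```

Note how the example leans on an explicit program state to sequence firings, which is exactly the RITA-style awkwardness that ROSIE's event-driven WHEN-THEN rules and iteration phrases are meant to relieve.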
INSTANCE SETS

When a rule's left-hand side is tested by the monitor an attempt is made to form the instance set for the rule's condition. The instance set of a boolean expression is the union of all ordered subsets of elements that match the expression, assuming automatic inheritance of attributes for more specific elements, i.e., those lower in the I-link hierarchy tree. This set is then used to instantiate rule variables. The methods used to create and use the instance set depend on the type of monitor being used and the type of rule being executed. The SET action in "SET <attribute> OF <name> TO <value>" stores the new value at the highest level element in the instance set and deletes existing values at lower levels, unless they are flagged for no inheritance. (If necessary, cached values are updated.) In the condition part of a rule the user may explicitly mention the type of knowledge element being sought. For example:

IF THERE IS a <name> WHOSE ...
IF THERE IS a CONCEPT <name> WHOSE ...
IF THERE IS an OBJECT <name> WHOSE ...
IF THERE IS an INSTANCE OF CONCEPT <name> WHOSE ...

In the "CONCEPT <name>" reference the instance set is the whole tree including the element referred to by <name> and all instances and abstractions under it. The clause "a <name> WHOSE <attribute> IS <value>" causes a search for the attribute starting at the <name> element and proceeding down the "INSTANCE" hierarchy. If the attribute has still not been found when the objects are reached, the search continues back up the tree above the original <name> node until either the attribute is found or the tree terminates.

**CASE PHRASE**

The rule "WHEN <conditions> THEN <actions>" executes its actions for one member of the instance set each time it is fired.
If it is desired to execute the actions for all members of the instance set during a single rule firing, the rule must be reformulated as "WHEN <conditions> THEN FOR ALL CASES <actions>." The "FOR ALL CASES" phrase may be used with IF-THEN rules in the same manner. The example below illustrates the "FOR ALL CASES" phrase.

**RULE 1:**

WHEN THERE IS a ship WHOSE speed IS LESS THAN 20 KNOTS
THEN SEND the name OF the ship TO the USER

**RULE 1a:**

WHEN THERE IS a ship WHOSE speed IS LESS THAN 20 KNOTS
THEN FOR ALL CASES SEND the name OF the ship TO the USER

When rule 1 is tested against the knowledge base and found to be true it is executed only for the first ship in the knowledge base whose speed is less than 20 knots. Thus, other rules are tested and made available for execution before rule 1 necessarily has a chance to fire again for other ships with speeds less than 20 knots. Rule 1a, on the other hand, does not relinquish control to other rules until it has been executed for every ship in the knowledge base with speeds less than 20 knots. This permits the user to write rules that can efficiently iterate through the data when so desired.

VARIABLES

Variables can be used in rule expressions to represent element names, attributes or values. The variable is identified by being enclosed in brackets, for example:

- a ship [x] WHOSE length IS GREATER THAN 150
- an ELEMENT [x] WHOSE ATTRIBUTE [y] IS VALUE [z]
- IF THERE IS an ELEMENT [x] THAT IS A MEMBER OF a fleet [y] THEN SET the status OF [x] TO the status OF [y]

The binding of a variable takes place when the variable first occurs in the expression, typically after the defining term. The defining term can be "ELEMENT," "OBJECT," "CONCEPT," or a particular element, object, or concept. The default defining term is "ELEMENT." The scope of the binding is within one rule.
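The contrast between rule 1 and rule 1a can be illustrated in Python. The ship data below is hypothetical, and the once-per-element bookkeeping that keeps a real WHEN-THEN rule from re-firing on the same element is omitted for brevity; the sketch shows only the one-member-per-firing versus whole-instance-set-per-firing distinction.

```python
# Conceptual sketch (not ROSIE): a plain WHEN-THEN firing handles one
# member of the instance set; a FOR ALL CASES firing handles them all.

ships = [
    {"name": "Enterprise", "speed": 20},
    {"name": "Tanker-1",   "speed": 12},
    {"name": "Tug-4",      "speed": 8},
]

def instance_set(kb):
    # the elements matching "a ship WHOSE speed IS LESS THAN 20 KNOTS"
    return [s for s in kb if s["speed"] < 20]

sent = []

def fire_once(kb):
    # RULE 1: acts on one member of the instance set, then yields control
    matches = instance_set(kb)
    if matches:
        sent.append(matches[0]["name"])

def fire_all_cases(kb):
    # RULE 1a: iterates over the whole instance set in a single firing
    for s in instance_set(kb):
        sent.append(s["name"])

fire_once(ships)        # sends only "Tanker-1"
fire_all_cases(ships)   # sends "Tanker-1" and "Tug-4" before yielding
print(sent)
```

Between firings of rule 1 the monitor may run other rules, so the slow ships are reported interleaved with other activity; rule 1a reports them all before relinquishing control.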
DATASETS/ACTIVATION

We allow named datasets and the ability to activate or deactivate them, e.g., "ACTIVATE classified-value-table." Since rules and data are treated alike the same activation mechanism is used for both, i.e., rulesets and their rules are just knowledge elements that can be (de)activated like data. Activation can be initiated either by user commands or by rule actions. Of course activating rules is quite different (in terms of effect) from activating data, since the monitor makes a clear distinction between rules and data. Only rule elements can be executed whereas all elements, including other rules, can be matched against the left-hand sides of rules during condition testing. There are two kinds of activation: global and local. Global activation involves defining a permanent operating context, i.e., the set of rules and data currently available for processing. The actions "ACTIVATE" and "DEACTIVATE" will be given the following meaning: "ACTIVATE alpha" means mark all the rules in the set alpha as accessible for current operations; "DEACTIVATE alpha" means mark all the rules as not accessible. This can be applied easily to both rules and objects without distinction. When elements are activated this way by a rule's action they are not available for processing until the execution of the rule has terminated. Activation and deactivation of rules and data is handled uniformly by the actions shown below. Activation adds rules or data in the named set to the active set.

```
ACTIVATE RULESET <name>
ACTIVATE DATASET <name>
```

Deactivation removes rules or data from the active set.

```
DEACTIVATE RULESET <name>
DEACTIVATE DATASET <name>
```

Local activation involves defining a temporary operating context, i.e., a set of rules and data that are active only while the rule that activated them is still being executed. Thus local activation is analogous to a subroutine call, and the action "USE" will be used to indicate this type of activation.
Hence, "USE alpha" means that the rules in alpha become the current active set and all other rules in the system are marked as inaccessible until a "RETURN" action is executed in alpha. The effect of executing the return will be to restore the set of rules that existed prior to the use action. The distinction here is that a push-and-pop stacking mechanism applies to "USE" and "RETURN," and does not apply to "ACTIVATE" and "DEACTIVATE." The form of the USE action is shown below.

USE RULESET <name>

There is an open question as to whether "USE" and "RETURN" should apply to data as well as rules. A second, more fundamental issue concerns the passing of parameters via the "USE" action to a new rule set. The parameters to be passed should be subject to the push and pop mechanism, along with the set of active rules, so that the "USE" mechanism can be used recursively.

Datasets can be defined by actions in rules, as illustrated below.

ASSIGN <name> TO DATASET <dataset name>

example:

IF THERE IS an ELEMENT [x] WHOSE type IS "navy"
THEN ASSIGN [x] TO DATASET navops

ASSIGN <name> TO RULESET <ruleset name>

example:

IF THERE IS a RULE [x] WHOSE certainty IS > .5
THEN ASSIGN [x] TO RULESET goodrules

Dataset definitions add names to the value of the DATASET attribute (D) associated with each data element, e.g., "ASSIGN x TO DATASET navops" sets up the link:

\[
\begin{array}{c}
\text{<data element x>}\\
\downarrow \text{D}\\
\text{navops}
\end{array}
\]

where D is a system attribute. However, in the case of ruleset definitions the user will perceive a hierarchy of I and M links as illustrated below.
\[
\begin{array}{ccc}
\text{Ruleset} & & \text{Rule}\\
\downarrow\,\text{I} & & \downarrow\,\text{I}\\
\text{rs1} & \xrightarrow{\;\text{M}\;} & \text{r2}
\end{array}
\]

Thus the user will be able to write rules that make use of inheritance and membership properties with regard to rule characteristics, e.g.,

IF THERE IS a RULE \([x]\) THAT IS NOT a MEMBER OF a RULESET
THEN ASSIGN \([x]\) TO RULESET rs1

V. USER SUPPORT ENVIRONMENT

The user support facilities of ROSIE are intended to help the user cope with the special problems of large, rich models. Models of real-world interest may be expected to involve large numbers of rules and data elements—far too many for the model builder or user to comprehend in detail. While user support issues are important in the design of any computing system, they are especially critical in large, complex systems intended for use by non-programmers. Hence, considerable effort has gone into planning effective user support facilities for ROSIE—to provide a friendly and helpful environment for the model builder. In this section, we outline three classes of key user support facilities in functional terms: top-level interface, editing, and model analysis.

THE USER'S TOP-LEVEL VIEW OF THE SYSTEM

Before the user can approach the substantive tasks of heuristic modeling—building data and rule elements—he must be able to invoke ROSIE from the host operating system and correctly interact with it. To aid the new user in learning about the varied ROSIE features, the system will incorporate a tutorial mechanism capable of describing the features and demonstrating how to use them. Thus a new user will be able to interact effectively with ROSIE even if he has only a minimal understanding of the basic concepts underlying the ROSIE design. The user must also exercise control over system options and features.
Thus, the system must have a set of commands that permit the user to control modes, options, file loading, running, interruption of running, resumption of running, trace setting, debugging, and the verbosity with which the system describes its own behavior. The model for these features is the latest implementation of RITA; these RITA functions will be included in the initial design for ROSIE.

There is one important top-level command, not present in RITA, that gives the user the ability to activate a particular set of rules and/or data elements from the command level; e.g.,

```
USE edit-rules;
```

This gives the user an easy way to isolate for execution a set of editing rules, or correctness-checking rules (see below), that are embedded within a user model. The commands associated with these facilities can be typed directly at the user's terminal, or they can be embedded in loadable ROSIE files to simplify subsequent system initialization.

Also provided is the RITA concept of immediate rules--rules that are entered from the user's terminal and are executed at once. These rules differ from system commands in that they dynamically interact with the elements of the user's current model; e.g.,

IF THERE IS A carrier [c] WHOSE readiness IS low
THEN FOR ALL CASES DEACTIVATE [c];

Immediate rules differ from ordinary rules in that they do not become a part of the universe of rules within a user model; they are executed at once and then discarded. Special editing capabilities are described below which further enhance the user's top-level control of the system.

EDITING FUNCTIONS

The principal tool that the model builder will use to create and modify elements of his model (rule and data elements) is a text editor. In this role, the editor will carry much of the burden of interaction between the user and ROSIE.
The functional properties of the editor must be designed to gracefully and unobtrusively assist the user in his work; those described in the following paragraphs are suggested by a broad sampling of user experiences with the RITA system.

The main function of the editing facilities is to facilitate the manual creation or change of rule and data elements. (Rules and data can also be created as a result of rule actions.) To support this, it is desirable to provide a sophisticated prompting facility. Prompting should be optional, and should be driven by user-specified templates for the most common kinds of structures in the user's model. The goal of prompting is to save the user from routine typing and to reduce the likelihood of typographical errors.

The editor can be invoked manually whenever the model is quiescent, so that rule and data elements can be edited. As an aid to debugging, an existing execution context can be preserved while manual editing occurs; this will allow model execution to be halted for editing and then resumed with no loss of current states, bindings, etc.

In addition to manual editing, there are two ways in which the user can build editing aids into the model itself using ROSIE language facilities. First, he can use rules to locate data in the model that needs editing, and package these materials for later manual editing; e.g.,

```
IF THERE IS A RULE [r] WHOSE conditions CONTAIN {'carrier'}
THEN FOR ALL CASES SEND [r] TO edit-file;
```

will gather up all rules which mention carriers in their left-hand sides and send copies of them to a file. This file can later be edited manually and the modified contents returned to the model.
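The rule-driven collection step above can be sketched in Python (the rule representation and names here are illustrative, not ROSIE internals):

```python
# Gather every rule whose conditions mention "carrier" and copy it to
# an edit list, mirroring the SEND ... TO edit-file rule above.

rules = [
    {"name": "r1", "conditions": "THERE IS A carrier [c] WHOSE readiness IS low"},
    {"name": "r2", "conditions": "THERE IS A ship WHOSE speed IS LESS THAN 20"},
]

# FOR ALL CASES semantics: every match is collected in one pass.
edit_file = [r for r in rules if "carrier" in r["conditions"]]
# edit_file now holds copies the user can edit and return to the model
```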
Extending this approach, the user can build rules that actually edit other rule or data elements in the model; e.g., ``` CREATE CONCEPT tf-sub; IF THERE IS A tf-escort [tfe] WHOSE INSTANCE IS submarine [s] THEN FOR ALL CASES REMOVE [s] FROM [tfe] AND INCLUDE [s] IN tf-sub; ``` These rules carry out a systematic change in the user's taxonomy of submarines, moving those formerly categorized as task-force escort vessels into a new category called "tf-sub." If the number of affected elements is large, then this rule would save the user substantial manual editing labor while eliminating the possibility of typing errors. To help the system protect privileged data fields from editing (e.g., the internal object ID field) and as an adjunct to a fairly rich prompting facility, we plan to include partial syntax checking on rules and data so that at least superficial syntax checks can be done before the material is released from the editor, saving time and computing resources. As support for interactive use of the language, the most recent lines typed by the user at his terminal will be captured in a transparent manner. If one or more of these lines proves to be erroneous, resulting in rejection by the system, the user will be able to edit the offending line(s) and resubmit them instead of having to retype the entire sequence. Once again, this should save typing and soften the adverse effects of simple typographical errors. This facility will apply to all interactive inputs, e.g., commands, immediate rules, or prompted responses. MODEL ANALYSIS An important question for any model builder is whether the behavior of the model, in the most general sense, accords with his expectations and needs. Where this is in doubt, the user will want to iterate through cycles of analysis, testing and modification in the hope of arriving at a state of the model that satisfies his goals. 
This process of model analysis is ordinarily easier for small, sparse models, for which the user is able to maintain a more or less complete mental image. For large, rich models, which ROSIE is intended to support, we recognize the importance of supplementing the model builder's intuition with specific tools to assist in the analysis process. In the following subsections we describe three sets of such tools, each of which addresses an important class of analysis problems: consistency in the model, inference with uncertainty, and explanation of results.

Consistency Among Rules and Data Elements: An obvious source of anomalous behavior in a model is the presence of collections of rule elements and/or data elements whose members are inconsistent with one another. Here are some simple examples of blatant inconsistencies:

(A) OBJECT carrier, NAME Enterprise, LOCATION Pacific, ... ;
    OBJECT carrier, NAME Enterprise, LOCATION Atlantic, ... ;

(B) IF THERE IS A red-sub WHOSE range IS LESS THAN 3
    THEN USE RULESET antiplane;

    IF THERE IS A red-sub WHOSE range IS LESS THAN 3
    THEN USE RULESET antisub;

These cases could have reasonably arisen as a result of clerical error or carelessness in entering new material into the model, either directly from the user's terminal or from loadable ROSIE files. Or they might arise from rule-based creation of new elements.

What can be done about this problem of consistency? The formal issues in evaluating consistency among the elements of complex models can be very deep; there is, in fact, little hope of providing a comprehensive automated solution to the detection or correction of consistency faults. Instead, the system will include approaches to helping the user identify collections of data elements and rules that may embody consistency defects, as well as other sources of faulty behavior in the model. The primary burden of recognizing the defects themselves, and of repairing them, rests with the model builder or user.
He has superior human pattern-recognition talents, and may be presumed to possess unique competence to make such judgments of his own model. The goal of system design in this area is to provide good tools for the user to help him in this task. Two kinds of tools will be made available, each focusing on the identification of "families" of rules and/or data elements whose properties may involve consistency or correctness issues. Both rest on the hypothesis that consistency defects of the more tractable kinds are likely to involve collections of rules and data elements with high internal similarity. The members of a family of rules would share key LHS and/or RHS elements; the members of a family of data elements would share attributes and/or values. Simple similarity metrics can be used to construct procedures that recognize similarities among sets of rules. At the "fulcrum" of family discovery, the user will either directly supply examples of key shared materials, or will point to existing rules and/or data-elements that embody them; the system procedures will then collect the members of the implicitly defined family from among the elements of the model, and organize them to simplify the user's review. While the design of these procedures will lead them to err on the side of overinclusiveness, the user will be given control of the similarity threshold employed so he can limit the size of generated families. In addition to the built-in system procedures just described, the user may often be able to create his own procedures for reviewing portions of the model. As in the editing situation, he may create and invoke rulesets whose function is to identify and collect families of related data elements and rules for review. 
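Similarity-based family discovery of this kind can be sketched in Python. The Jaccard metric below is our choice for illustration, not necessarily the metric ROSIE would use; the rule names and condition terms are hypothetical:

```python
# Group rules into a "family" when their condition terms overlap with a
# user-supplied exemplar above a user-controlled threshold. A low
# threshold errs on the side of overinclusiveness, as the text notes.

def jaccard(a, b):
    """Similarity of two term sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def family(exemplar_terms, rules, threshold=0.5):
    """Collect rules similar to the exemplar for the user's review."""
    return [name for name, terms in rules.items()
            if jaccard(exemplar_terms, terms) >= threshold]

rules = {
    "r1": {"ship", "speed", "send"},
    "r2": {"ship", "speed", "deactivate"},
    "r3": {"sub", "range", "use"},
}
family({"ship", "speed", "send"}, rules, threshold=0.5)
# collects r1 and r2; r3 shares no terms with the exemplar
```

Raising the threshold shrinks the generated family, which is the control knob the text gives the user to limit overinclusiveness.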
Reasoning in the Presence of Uncertainty: Unlike a system of classical mathematical or logical inference in which all premises and inference rules are assumed perfectly correct and reliable, the elements of a heuristic inference model may differ substantially from one another in reliability or certainty. Some rules and facts will enjoy the user's full confidence; others will have a more questionable status. Also, the user's estimate of particular facts and rules will change with growing knowledge and experience.

The main problem this situation poses for the user of a heuristic model is how to estimate the reliability of inferences and predictions which are based upon uncertain information and transformations. The developers of RITA chose to leave this issue entirely to the user as a way of avoiding difficult problems of implementation in a minicomputer environment. The most common approach among systems which attempt to solve this problem (Shortliffe, 1976) is to:

- let the user express his estimate of the reliability of facts and rules on one or more numerical scales, and
- provide built-in functions which compute similar estimates for new inferences on the basis of the estimates of the facts and rules used to reach them.

But this approach has itself given rise to two new problems. The first concerns the form of the certainty-combining functions to be embedded in the system; it is still a matter for dispute which (if any) of several candidate functions is the theoretically 'correct' or 'best' one. The second problem is simply that none of the candidate functions has met with uniform user satisfaction; users express doubts about the confidence estimates which the system assigns to new inferences—they are irregularly higher or lower than the user himself would like to assign to the same conclusions, sometimes dramatically so.
The approach adopted for ROSIE is based on two hypotheses:

- the heuristic model builder and users of such models need help in assessing the strength of the model's inferences and predictions, but
- present understanding of the logical and psychological bases of heuristic inference is too weak to yield a comprehensive, fully automated solution that users would accept.

From these we conclude that the most useful strategy is to provide system support for the kind of inference validation that people routinely employ in coping with the uncertainties of heuristic reasoning in everyday life: a careful review of the evidence. Hence, while ROSIE invites the user to assign "certainty factors" to his rules and facts, the system will not routinely apply special functions for combining these in evaluating new inferences; instead, facilities are provided for locating and reviewing the facts and rules used in reaching them. The final assignment of new certainty factors is then left to the user.

However, there will be a few certainty-combining packages built into ROSIE for use by sophisticated users who understand their effects and implications. For example, we will supply one very simple yet useful certainty-combining function that works as follows: all new data produced will have a certainty equal to the minimum certainty factor (over both rules and data) used to produce it.

Certainty factors (CFs) are expressed as numbers, and the user can employ whatever kind of numerical scale the system can support for this purpose. It may be desirable to provide a mapping from words representing different degrees of certainty (e.g., HIGH, MEDIUM, LOW) into corresponding numerical values—either point values or interval values. The user will be able to assign a CF to each rule and to the value of every attribute of data elements.
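The minimum-certainty combining function described above is small enough to state directly; a Python sketch (the CF values here are illustrative):

```python
# A new datum's certainty is the minimum CF over the rule that produced
# it and all the data used in the instantiation.

def min_cf_combine(rule_cf, data_cfs):
    """CF of a new inference under the minimum-certainty package."""
    return min([rule_cf] + list(data_cfs))

# Hypothetical firing: a rule with CF .9 instantiated by data
# with CFs .8 and 1.0 yields a new datum with CF .8.
min_cf_combine(0.9, [0.8, 1.0])
```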
For the time being it is assumed that, where the user does not supply a CF, the default CF will be presumed to be unity, or the highest value representing complete certainty. A single user-defined CF scale is used for both rules and data elements. In addition to their primary role in inference evaluation, it may be that CFs, like other components of rules and data elements, can play a role in conflict resolution; it is too early to make a judgement on this issue.

The approach to validating conclusions is to assist the user in reviewing the chain of reasoning involved in reaching the conclusions, with particular attention to the weaker premises and rules. A running history of the system's actions will be maintained in a history file; the contents of this file will be used to support post-mortem traces and other debugging functions as well as reviews of inference. ROSIE will contain tools capable of searching this history file, collecting the facts and rules underlying particular inferences, and organizing these materials for the model builder's use.

When the model is run, the user will have the option of providing a cutoff point or threshold CF value. The threshold will limit the scope of the data or rules to be considered, as only items with CF values equal to or greater than the threshold will be used in the calculation. Thus a threshold of .8 would refer to "all data elements and rules with a CF of .8 or above." If the user prefers to think in terms of more abstract CFs, the system will give him a CF test to calibrate his certainty and will map it into a set of linguistic terms, e.g., CERTAIN, HIGH, MEDIUM, LOW. He will then use these values in conversing with the system, although ROSIE will internally use standard numerical values. An inference will be made using all rules and data above the threshold value (which has a default value of 0).
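Threshold filtering can be sketched in Python (the element names and CF assignments are illustrative, not from the report):

```python
# Only elements whose CF meets or exceeds the threshold take part in
# an inference; the default threshold of 0 admits everything.

data_cfs = {"fact-1": 0.8, "fact-2": 0.9, "fact-3": 0.6}
rule_cfs = {"rule-1": 0.9, "rule-2": 0.6}

def eligible(cfs, threshold=0.0):
    """Names of elements whose CF is at or above the threshold."""
    return {name for name, cf in cfs.items() if cf >= threshold}

eligible(data_cfs, 0.8)   # fact-3 is excluded
eligible(rule_cfs, 0.8)   # rule-2 is excluded
```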
If no certainty-combining package is specified, all new data produced will have a default certainty of 1, as illustrated in the example below.

```
DATA            RULES
d1:  .8         .9  r1: c1 & c2 & c3 --> a1
d2:  .9         .8  r2: c3 & c4 & c5 --> a1
d3:  .6         .6  r3: c4 & c6      --> a2
d4: 1.0
```

If the threshold value is .8, then only d1, d2, d4, r1 and r2 would be used in the calculation. If r1 was properly instantiated by d1, d2, and d4, then a1 would be added to the data with a default CF of 1, and the process would continue.

The user will be able to ascertain the true validity of an inference by querying the system about the chain of reasoning used to reach the decision. For example, he might choose to examine:

- data and rules by CF (a display of the rules and data involved, ordered by ascending/descending CFs)
- weak links (rules/facts with CFs below some user-specified threshold)
- initial data (initial rules/facts used in the chain)
- intermediate facts (new attribute values created in the course of inference)

If he lacks confidence in the decision reached by the system he can change the CFs on the rule or data elements, or change the CF threshold value, and run the model again, repeating this until he obtains a decision he trusts.

Explanation of Model Behavior: The work of gaining insight into the behavior of a complex model has static and dynamic aspects. In the preceding sections on editing facilities and consistency-checking tools we have outlined the ways in which the ROSIE system can help the user review and modify the component parts of a static model—the rules and data elements that make it up. As a model runs, its rules and data elements interact dynamically with one another and with the system's monitor. The user's concern with heuristic reasoning in the presence of uncertainty deals with one facet of the dynamic interaction among the rules and data elements. In this section, we focus on more general interactions among the rules, the data elements, and the ROSIE monitor.
Much of the information useful in explaining the behavior of the system to the user exists in the history file which the system maintains. One key approach to assisting the user to understand the system's behavior, therefore, is based on providing various specialized filters to extract from the history file information which would help the user to understand the system's actions. One type of filter that may be defined on the history file produces an explanation of a chain of reasoning. This mechanism, because it is based upon the general history file, can be used equally well to help explain forward or backward chains of inference. At various user-controlled levels of detail, the system can exhibit chains of inference by rules, by rules and data, by rules and data with bindings, etc. In addition, the explanation facility can make use of annotations on rule and data elements, supplied as attributes by the model builder, to help the user understand sequences of system actions. If the history file internally takes the form of ROSIE data elements, then the full generality of the modeling capability can be applied to it for scanning and other similar activities. Often, the user will require information about the behavior of the system which goes beyond inference schemas. He may want to trace all rules which actually fire, those rules whose left-hand sides were true, those rules whose left-hand sides were merely tested, those which were retrieved for a test. The user may be interested in the reason why the left-hand side was retrieved but failed to be tested, or he may want to know the criterion that was applied to exclude this rule during conflict resolution. Similar concerns may apply for the data elements: which elements have their values tested, which are members of an instance set, or which have their values set. For these purposes, ROSIE will include mechanisms for tracing the system's actions at various levels of detail. 
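Leveled tracing of the kind just described can be sketched in Python. The four event names below mirror the stages mentioned above (gathered for a test, tested, true, fired); the numeric levels and rule names are our own illustrative choices:

```python
# Each monitor event carries a detail level; a trace at level n reports
# every event at that level or below, so higher levels add detail.

LEVELS = {"fired": 1, "true": 2, "tested": 3, "gathered": 4}

def trace(events, level):
    """Keep only the history events visible at the requested level."""
    return [(rule, ev) for rule, ev in events if LEVELS[ev] <= level]

history = [("r1", "gathered"), ("r1", "tested"),
           ("r1", "true"), ("r1", "fired"),
           ("r2", "gathered"), ("r2", "tested")]

trace(history, 1)   # lowest level: only firings are reported
trace(history, 3)   # also reports tested and true events
```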
The hierarchy among the levels of tracing can be tentatively defined as follows:

- At the lowest level, the user will be told when a designated rule fires.
- At a higher level, when its left-hand side evaluates to true, and when it fires.
- At a still higher level, when its left-hand side is being tested, when it evaluates to true, or when it fires.
- At a higher level still, when the rule has been gathered in the search for eligible left-hand sides, when it is tested, when it is true, or when it fires.
- At the highest level of all, the level of greatest detail, the system can be asked to tell 'everything' about system actions affecting the rule (or data element).

The ROSIE system will construct concise English-like descriptions of the system's behavior with respect to the designated elements, so as to avoid imposing on the user the burden of remembering in detail the functions of the monitor: conflict resolution, search strategy, and testing strategies. The material emitted by the system in response to trace or other explanatory commands can be directed by the user either to the user's terminal (for immediate viewing) or to a file (for later use) or both.

REFERENCES

Artificial Intelligence, Cambridge, Massachusetts, 1977, pp. 833-842.

Waterman, D. A., and F. Hayes-Roth, Pattern-directed Inference
Malacology: A Programmable Storage System Michael A. Sevilla†, Noah Watkins†, Ivo Jimenez, Peter Alvaro, Shel Finkelstein, Jeff LeFevre, Carlos Maltzahn University of California, Santa Cruz {msevilla, jayhawk, ivo}@soe.ucsc.edu, {palvaro, shel}@ucsc.edu, {jlefevre, carlosm}@soe.ucsc.edu Abstract Storage systems need to support high-performance for special-purpose data processing applications that run on an evolving storage device technology landscape. This puts tremendous pressure on storage systems to support rapid change both in terms of their interfaces and their performance. But adapting storage systems can be difficult because unprincipled changes might jeopardize years of code-hardening and performance optimization efforts that were necessary for users to entrust their data to the storage system. We introduce the programmable storage approach, which exposes internal services and abstractions of the storage stack as building blocks for higher-level services. We also build a prototype to explore how existing abstractions of common storage system services can be leveraged to adapt to the needs of new data processing systems and the increasing variety of storage devices. We illustrate the advantages and challenges of this approach by composing existing internal abstractions into two new higher-level services: a file system metadata load balancer and a high-performance distributed shared-log. The evaluation demonstrates that our services inherit desirable qualities of the back-end storage system, including the ability to balance load, efficiently propagate service metadata, recover from failure, and navigate trade-offs between latency and throughput using leases. 
CCS Concepts • Information systems → Distributed storage; • Software and its engineering → File systems management; Software functional properties

Keywords Distributed Storage, Programmability, Ceph

Figure 1: Scalable storage systems have storage daemons which store data, monitor daemons (M) that maintain cluster state, and service-specific daemons (e.g., file system metadata servers). Malacology enables the programmability of internal abstractions (bold arrows) to re-use and compose existing subsystems. With Malacology, we built new higher-level services, ZLog and Mantle, that sit alongside traditional user-facing APIs (file, block, object).

1. Introduction

A storage system implements abstractions designed to persistently store data and must exhibit a high level of correctness to prevent data loss. Storage systems have evolved around storage devices that often were orders of magnitude slower than CPU and memory, and therefore could dominate overall performance if not used carefully. Over the last few decades members of the storage systems community have developed clever strategies to meet correctness requirements while somewhat hiding the latency of traditional storage media [12]. To avoid lock-in by a particular vendor, users of storage systems have preferred systems with highly standardized APIs and lowest common denominator abstract data types such as blocks of bytes and byte stream files [4]. A number of recent developments have disrupted traditional storage systems. First, the falling prices of flash storage and the availability of new types of non-volatile memory that are orders of magnitude faster than traditional spinning media are moving overall performance bottlenecks away from storage devices to CPUs and networking, and pressure storage systems to shorten their code paths and incorporate new optimizations [21, 22].
Second, emerging “big data” applications demand interface evolution to support flexible consistency as well as flexible structured data representations.

† These authors contributed equally to this work.

Contribution 1: We define a programmable storage system to be a storage system that facilitates the re-use and extension of existing storage abstractions provided by the underlying software stack, to enable the creation of new services via composition. A programmable storage system can be realized by exposing existing functionality (such as file system and cluster metadata services and synchronization and monitoring capabilities) as interfaces that can be “glued together” in a variety of ways using a high-level language. Programmable storage differs from active storage [35]—the injection and execution of code within a storage system or storage device—in that the former is applicable to any component of the storage system, while the latter focuses on the data access level. Given this contrast, we can say that active storage is an example of how one internal component (the storage layer) is exposed in a programmable storage system. To illustrate the benefits and challenges of this approach we have designed and evaluated Malacology, a programmable storage system that facilitates the construction of new services by re-purposing existing subsystem abstractions of the storage stack. We build Malacology in Ceph, a popular open source software storage stack. We choose Ceph to demonstrate the concept of programmable storage because it offers a broad spectrum of existing services, including distributed locking and caching services provided by file system metadata servers, durability and object interfaces provided by the back-end object store, and propagation of consistent cluster state provided by the monitoring service (see Figure 1). Malacology is expressive enough to provide the functionality necessary for implementing new services.
Malacology includes a set of interfaces that can be used as building blocks for constructing novel storage abstractions, including:

1. An interface for managing strongly-consistent time-varying service metadata.
2. An interface for installing and evolving domain-specific, cluster-wide data I/O functionality.
3. An interface for managing access to shared resources using a variety of optimization strategies.
4. An interface for load balancing resources across the cluster.
5. An interface for durability that persists policies using the underlying storage stack’s object store.

Contribution 2: We implement two distributed services using Malacology to demonstrate the feasibility of the programmable storage approach:

1. A high-performance distributed shared log service called ZLog, which is an implementation of CORFU [6]
2. An implementation of Mantle, the programmable load balancing service [37]

The remainder of this paper is structured as follows. First, we describe and motivate the need for programmable storage by describing current practices in the open source software community. Next we describe Malacology by presenting the subsystems within the underlying storage system that we re-purpose, and briefly describe how those systems are used within Malacology (Section 4). Then we describe the services that we have constructed in the Malacology framework (Section 5), and evaluate our ideas within our prototype implementation (Section 6). We conclude by discussing future and related work.

2. Application-Specific Storage Stacks

Building storage stacks from the ground up for a specific purpose results in the best performance. For example, GFS [18] and HDFS [38] were designed specifically to serve MapReduce and Hadoop jobs, and use techniques like exposing data locality and relaxing POSIX constraints to achieve application-specific I/O optimizations.
Another example is Boxwood [32], which experimented with B-trees and chunk stores as storage abstractions to simplify application building. Alternatively, general-purpose storage stacks are built with the flexibility to serve many applications by providing standardized interfaces and tunable parameters. Unfortunately, managing competing forces in these systems is difficult, and users want more control from the general-purpose storage stacks without going as far as building their storage system from the ground up. To demonstrate a recent trend towards more application-specific storage systems we examine the state of programmability in Ceph. Something of a storage Swiss army knife, Ceph simultaneously supports file, block, and object interfaces on a single cluster [1]. Ceph’s Reliable Autonomic Distributed Object Storage (RADOS) system is a cluster of object storage daemons that provide Ceph with data durability and integrity using replication, erasure-coding, and scrubbing [50]. Ceph already provides some degree of programmability; the object storage daemons support domain-specific code that can manipulate objects on the server that has the data local. These “interfaces” are implemented by composing existing low-level storage abstractions that execute atomically. They are written in C++ and are statically loaded into the system. The Ceph community provides empirical evidence that developers are already beginning to embrace programmable storage. Figure 2 shows a dramatic growth in the production use of domain-specific interfaces in the Ceph community since 2010. In that figure, classes are functional groupings of methods on storage objects (e.g. remotely computing and caching the checksum of an object extent). What is most remarkable is that this trend contradicts the notion that API changes are a burden for users. Rather, it appears that gaps in existing interfaces are being addressed through ad-hoc approaches to programmability.
In fact, Table 1 categorizes existing interfaces and we clearly see a trend towards reusable services.

Figure 2: Since 2010, the growth in the number of co-designed object storage interfaces in Ceph has been accelerating. This plot is the number of object classes (a group of interfaces), and the total number of methods (the actual API end-points).

<table>
<thead>
<tr> <th>Category</th> <th>Example</th> <th>#</th> </tr>
</thead>
<tbody>
<tr> <td>Logging</td> <td>Geographically distribute replicas</td> <td>11</td> </tr>
<tr> <td>Metadata Management</td> <td>Snapshots in the block device; scan extents for file system repair</td> <td>74</td> </tr>
<tr> <td>Locking</td> <td>Grants clients exclusive access</td> <td>6</td> </tr>
<tr> <td>Other</td> <td>Garbage collection, reference counting</td> <td>4</td> </tr>
</tbody>
</table>

Table 1: A variety of object storage classes exist to expose interfaces to applications. # is the number of methods that implement these categories.

The takeaway from Figure 2 is that programmers are already trying to use programmability because their needs, whether they be related to performance, availability, consistency, convenience, etc., are not satisfied by the existing default set of interfaces. The popularity of the custom object interface facility of Ceph could be due to a number of reasons, such as the default algorithms/tunables of the storage system being insufficient for the application’s performance goals, programmers wanting to exploit application-specific semantics, and/or programmers knowing how to manage resources to improve performance.
A solution based on application-specific object interfaces is a way to work around the traditionally rigid storage APIs because custom object interfaces give programmers the ability to tell the storage system about their application: if the application is CPU or I/O bound, if it has locality, if its size has the potential to overload a single node, etc. Programmers often know what the problem is and how to solve it, but until they had the ability to modify object interfaces, they had no way to express to the storage system how to handle their data. Our approach is to expose more of the commonly used, code-hardened subsystems of the underlying storage systems as interfaces. The intent is that these interfaces, which can be as simple as a redirection to the persistent data store or as complicated as a strongly consistent directory service, should be used and re-used in many contexts to implement a wide range of services. By making programmability a ‘feature’, rather than a ‘hack’ or ‘workaround’, we help standardize a development process that now is largely ad-hoc.

3. Challenges

Implementing the infrastructure for programmability into existing services and abstractions of distributed storage systems is challenging, even if one assumes that the source code of the storage system and the necessary expertise for understanding it is available. Some challenges include:

- Storage systems are generally required to be highly available, so any complete restart of the storage system to reprogram it is usually unacceptable.
- Policies and optimizations are usually hard-wired into the services and one has to be careful when factoring them to avoid introducing additional bugs. These policies and optimizations are usually cross-cutting solutions to concerns or trade-offs that cannot be fully explored at the time the code is written (as they relate to workload or hardware).
Given these policies and optimizations, decomposition of otherwise orthogonal internal abstractions can be difficult or dangerous.

- Mechanisms that are often only exercised according to hard-wired policies and not in their full generality have hidden bugs that are revealed as soon as those mechanisms are governed by different policies. In our experience, introducing programmability into a storage system proved to be a great debugging tool.
- Programmability, especially in live systems, implies changes that need to be carefully managed by the system itself, including versioning and propagation of those changes without affecting correctness.

To address these challenges we present Malacology, our prototype programmable storage system. It uses the programmable storage design approach to evolve storage systems efficiently and without jeopardizing years of code-hardening and performance optimization efforts. Although Malacology uses the internal abstractions of the underlying storage system, including its subsystems, components, and implementations, we emphasize that our system still addresses the general challenges outlined above. The main challenge of designing a programmable storage system is choosing the right internal abstractions and picking the correct layers for exposing them. A programmable storage system is not defined by what abstractions are exposed; rather, a programmable storage system adheres to the design approach of exposing interfaces so administrators can have better control of the storage system. The interfaces presented in this paper are abstractions that we found useful for building our prototype services ZLog and Mantle, yet they may not provide the best trade-offs for all higher-level services.
For example, if consensus is correctly exposed one could implement high-level features like versioning, serialization, or various flavors of strongly consistent data management on top; but perhaps a low-level consensus interface is suited well for a particular set of applications. These questions are not answered in this paper and instead we focus on showing the feasibility of building such a system, given advances in the quality and robustness of today’s storage stacks. The Malacology prototype we present has been implemented on Ceph. While there are other systems on top of which Malacology could be implemented (see Table 2), we choose Ceph because it is a production quality system and because it is open source. The large developer community ensures that code is robust and the visibility of the code lets us expose any interface we want. In the next section we describe the Ceph components that we expose as Malacology interfaces.

4. Malacology: A Programmable Storage System

The guiding principle is to re-use existing services and compose them so that these services can be programmed. We accomplish programmability of a service by exporting bindings for an interpreted programming language so that programming can occur without having to restart the storage system (see also below, Section 4.4). Table 2 shows the internal services from Ceph that we expose in the Malacology prototype via Lua [26] bindings and Figure 3 compares what was already present in Ceph (gray boxes) to the Malacology interfaces we added. Section 5 will describe the higher-level services we built with these interfaces. Lua is a portable embedded scripting language and we choose it as the interpreted language for Malacology because it offers superior performance and productivity trade-offs, including a JIT-based implementation that is well known for near native performance.
Additionally, Lua has been used extensively in game engines and systems research [45], including storage systems where it has been effectively used both on [17, 20, 48] and off [37] the performance critical path. Finally, the flexibility of the runtime allows execution sandboxing in order to address security and performance concerns. We will now discuss the common subsystems used to manage storage systems and how Malacology makes them programmable.

4.1 Service Metadata Interface

Keeping track of state in a distributed system is an essential part of any successful service and a necessary component in order to diagnose and detect failures when they occur. This is further complicated by variable propagation delays and heterogeneous hardware in dynamic environments. Service metadata is information about the daemons in the system and includes membership details, hardware layout (e.g., racks, power supplies, etc.), data layout, and daemon state and configuration. It differs from traditional file system metadata, which is information about files. For the rest of the paper when we use the phrase “metadata server” or “metadata service”, we are referring to the daemon(s) that manages file system metadata (not service metadata).

Existing Ceph Implementation: a consistent view of cluster state among server daemons and clients is critical to provide strong consistency guarantees to clients. Ceph maintains cluster state information in per-subsystem data structures called “maps” that record membership and status information. A Paxos [30] monitoring service is responsible for integrating state changes into cluster maps, responding to requests from out-of-date clients and synchronizing members of the cluster whenever there is a change in a map so that they all observe the same system state. As a fundamental building block of many system designs, consensus abstractions such as Paxos are a common technique for maintaining consistent data versions, and are a useful service to expose.
The default behavior of the monitor can be seen as a Paxos-based notification system, similar to the one introduced in [13], allowing clients to identify when new values (termed epochs in Ceph) are associated to given maps. Since Ceph does not expose this service directly, as part of our Malacology implementation we expose a key-value service designed for managing service metadata that is built on top of the consensus engine. Since the monitor is intended to be out of the high-performance I/O path, a general guideline is to make use of this functionality infrequently and to assign small values to maps.

**Malacology:** we expose a strongly-consistent view of time-varying service metadata as an interface rather than a hidden internal component. This is shown in Figure 4, where object interfaces and load balancer policies use the Service Metadata interface. Malacology provides a generic API for adding arbitrary values to existing subsystem cluster maps. As a consequence, applications can attach simple but useful service-specific logic to the strongly-consistent interface, such as authorization control (only specific clients can write new values) or triggering actions based on specific values (e.g., sanitizing values). The higher-level services we implement in Section 5 make use of this functionality to register, version and propagate dynamic code (Lua scripts) for new object interfaces defined in storage daemons (Section 4.2) and policies in the load balancer (Section 4.3). Using this service guarantees that interface definitions are not only made durable, but are transparently and consistently propagated throughout the cluster so that clients are properly synchronized with the latest interfaces.

**Impact:** provides core functionality because it lets daemons come to a consensus on system-critical state. Bugs in the internal subsystems, or omitting this from services that need this type of consistency, affect correctness.
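The epoch-versioned map described above can be pictured with a toy model. The sketch below is our own single-process illustration, not Ceph's monitor API; the class and method names are invented. Every accepted update bumps the epoch, and a proposer that presents a stale epoch is rejected and must resynchronize first, mirroring how the monitors respond to out-of-date clients.

```python
# Illustrative sketch (not the Ceph/Malacology API) of an epoch-versioned
# service metadata map. All names here are hypothetical.

class ServiceMetadataMap:
    def __init__(self):
        self.epoch = 0        # monotonically increasing map version
        self.entries = {}     # arbitrary service metadata key/values

    def propose(self, client_epoch, key, value):
        """Accept an update only from a client that has seen the latest map."""
        if client_epoch != self.epoch:
            return None       # stale client: must sync() before retrying
        self.entries[key] = value
        self.epoch += 1       # new epoch is what gets propagated to daemons
        return self.epoch

    def sync(self):
        """Return the authoritative map contents and their epoch."""
        return self.epoch, dict(self.entries)


m = ServiceMetadataMap()
epoch, _ = m.sync()
assert m.propose(epoch, "balancer_version", "policy_v2") == 1
assert m.propose(epoch, "balancer_version", "policy_v3") is None  # stale epoch rejected
```

The same shape (version check, update, epoch bump) is what lets higher-level services such as Mantle treat the monitor as a consistency anchor for small values like policy version numbers.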
### 4.2 Data I/O Interface

Briefly described in Section 2, Ceph supports application-specific object interfaces [50]. The ability to offload computation can reduce data movement, and transactional interfaces can significantly simplify construction of complex storage interfaces that require uncoordinated parallel access. To address these concerns, Malacology takes advantage of Lua extensions contributed by the Ceph community. This allows new object interfaces to be dynamically loaded into the system and modified at runtime, resulting in an object storage API with economy of expression, which at the same time provides the full set of features of the original object interface implementation. New object interfaces that are expressed in thousands of lines of code can be implemented in approximately an order of magnitude less code [17]. While the use of Lua does not prevent deployment of malicious code, certain types of coding mistakes can be handled gracefully, and access policies are used to limit access to trusted users [26].

Impact: helps applications optimize performance by pushing behavior to lower parts of the storage stack, thereby minimizing hops and distributing computation.

4.3 Distributed Metadata Interfaces

File systems provide clients with the familiar POSIX file abstraction. While this guarantees strong consistency it comes at the cost of scalability, increased complexity, and lower performance. In general, distributed file systems protect resources by providing hierarchical indexing and distributed locking services.

4.3.1 Shared Resource Interface

File system metadata servers manage client sessions, allowing clients to obtain locks (e.g. file byte ranges) and capabilities (e.g. to cache file data). Clients and metadata servers use a cooperative protocol in which clients voluntarily release resources back to the file system metadata service in order to implement sharing policies.
Existing Ceph Implementation: the locking service implements a capability-based system that expresses what data and file system metadata clients are allowed to access as well as what state they may cache and modify locally. While designed for the file abstraction, indexing, locking, and caching are all common services that are useful to a broad spectrum of applications. Distributed applications that share centralized resources (e.g. a database or directory) face similar challenges which are often solved using application-specific sharding.

Malacology: while the current policy for sharing access and voluntarily releasing resources is largely best-effort, Malacology supports generalized policies between metadata servers and clients that can be used to implement fairness or priority.

Impact: provides core functionality to protect and provide exclusive access for any shared resource. May hurt performance if the resource in question does not require strong consistency.

4.3.2 File Type Interface

Applications that manage large amounts of file system metadata (e.g. users or database snapshots) often require a naming service. The metadata service exposes a POSIX file system hierarchy where files and directories are represented as inode data structures.

Existing Ceph Implementation: CephFS is the POSIX-compliant file system that uses Ceph. Inodes are quite large (1KB for an inode, 400 bytes for a directory entry, and 700 bytes for a directory) and contain CephFS-specific policies like how to stripe data across RADOS.

Malacology: allows new inode types to be defined such that applications can create domain-specific interfaces to inodes that may modify locking and capability policies. We will show how this is used in Section 5.2.1 when we discuss a distributed shared-log built on Malacology.

Impact: this interface is both a feature and a performance optimization.
It is a feature because it allows developers to add support for different storage types, such as how to read new file formats or what consistency semantics to use for a specific subtree in the hierarchical namespace. It is also a performance optimization because future programmers can add optimizations for processing specific types of files into the inode itself.

4.3.3 Load Balancing Interface

Many large-scale storage systems separate file system metadata and data I/O so that the corresponding services can scale independently. Metadata requests transfer small amounts of data and they happen relatively frequently, so many systems employ separate file system metadata clusters.

Existing Ceph Implementation: addresses the challenge of balancing file system metadata load with a separate metadata cluster. This cluster uses load balancing policies to migrate directory inodes around the cluster to alleviate overloaded servers [49]. The policies use metrics based on system state (e.g. CPU and memory utilization) and statistics collected by the cluster (e.g. the popularity of an inode). Ceph uses dynamic subtree partitioning to move variable sized namespace subtrees. These units can be shipped anywhere (i.e., to any metadata server of any capacity) at any time for any reason. The original balancer was designed with hard-coded policies and tunables.

Malacology: the existing load balancing mechanisms are exposed through an API and programmers can customize the behavior through a domain specific language. These mechanisms include the ability to migrate, partition, and measure load. Using the Service Metadata and Durability interfaces, this Load Balancing interface can safely version balancer policies, save balancer policies in the back-end object store and centralize warnings/errors. When combined with the File Type interface, the Load Balancing interface can express policies for handling a variety of multi-tenant workloads.
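To give a feel for the kind of policy such an interface accepts, here is a hypothetical balancing policy sketched in Python. The real policies are written in a domain-specific language (Lua, per Section 4); the function names (`when`, `where`), the threshold, and the load model below are all our own illustrative assumptions, not the actual API.

```python
# Hypothetical sketch of a two-part balancing policy: "when" decides
# whether this metadata server should shed load at all, and "where"
# decides how much load to send to each peer. Names/thresholds invented.

def when(my_load, cluster_loads, threshold=0.5):
    """Migrate only if we carry more than `threshold` of the total load."""
    total = sum(cluster_loads)
    return total > 0 and my_load / total > threshold

def where(my_load, cluster_loads):
    """Ship our surplus to underloaded peers, proportional to spare capacity."""
    mean = sum(cluster_loads) / len(cluster_loads)
    spare = [max(mean - load, 0.0) for load in cluster_loads]
    surplus = my_load - mean
    total_spare = sum(spare) or 1.0
    return [surplus * s / total_spare for s in spare]


loads = [8.0, 1.0, 1.0]           # server 0 is overloaded
assert when(loads[0], loads)      # server 0 decides to migrate
targets = where(loads[0], loads)
assert targets[0] == 0.0          # nothing is "sent" to ourselves
```

Separating the decision ("when") from the placement ("where") is the point of exposing the mechanism behind an API: the migrate/partition/measure machinery stays in the storage system, while this small amount of logic can be swapped at runtime.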
**Impact:** helps applications optimize performance by allowing them to specify how to partition, replicate, and distribute metadata in response to overloaded servers.

### 4.4 Durability Interface

Object stores protect data using techniques like erasure coding, replication, and data scrubbing. For scalability, many of these features are implemented using a peer-to-peer protocol that allows object storage daemons to operate autonomously without a centralized coordinator.

**Existing Ceph Implementation:** provides storage by striping and replicating data across RADOS [50], the reliable distributed object store. RADOS protects data using common techniques such as erasure coding, replication, and scrubbing. For example, when the number of placement groups changes, the object storage daemons re-balance and re-shard data in the background in a process called placement group splitting. During placement group splitting, object storage daemons communicate directly with each other to converge on a new data layout. In order to reduce load on the monitoring service, the object storage daemons use a gossip protocol to efficiently propagate changes to cluster maps throughout the system, and autonomously initiate recovery mechanisms when failures are discovered.

**Malacology:** metadata service policies and object storage interfaces are stored durably within RADOS and are managed by storing references in the object server daemon maps. Since the cluster already propagates a consistent view of these data structures, we use this service to automatically install interfaces in object storage daemons, and install policies within the metadata server daemons such that clients and daemons are synchronized on correct implementations without restarting.

**Impact:** this is a feature because it adds data safety and persistence to system metadata; while nice to have, it does not necessarily affect correctness.

### 5. Services Built on Malacology

In this section we describe two services built on top of Malacology. The first is Mantle, a framework for dynamically specifying file system metadata load balancing policies. The second system, ZLog, is a high-performance distributed shared-log. In addition to these services, we will demonstrate how we combine ZLog and Mantle to implement service-aware metadata load balancing policies.

#### 5.1 Mantle: Programmable Load Balancer

Mantle [37] is a programmable load balancer that separates the metadata balancing policies from their mechanisms. Administrators inject code to change how the metadata cluster distributes metadata. Our previous work showed how to use Mantle to implement a single node metadata service, a distributed metadata service with hashing, and a distributed metadata service with dynamic subtree partitioning. The original implementation was “hard-coded” into Ceph and lacked robustness (no versioning, durability, or policy distribution). Re-implemented using Malacology, Mantle now enjoys (1) the versioning provided by Ceph’s monitor daemons and (2) the durability and distribution provided by Ceph’s reliable object store. Re-using the internal abstractions with Malacology resulted in a $2 \times$ reduction in source code compared to the original implementation.

##### 5.1.1 Versioning Balancer Policies

Ensuring that the version of the current load balancer is consistent across the physical servers in the metadata cluster was not addressed in the original implementation. The user had to set the version on each individual server and it was trivial to make the versions inconsistent. Maintaining consistent versions is important for cooperative balancing policies, where local decisions are made assuming properties about other instances in the cluster. With Malacology, Mantle stores the version of the current load balancer in the Service Metadata interface.
The version of the load balancer corresponds to an object name in the balancing policy. Using the Service Metadata interface means Mantle inherits the consistency of Ceph’s internal monitor daemons. The user changes the version of the load balancer using a new CLI command.

##### 5.1.2 Making Balancer Policies Durable

The load balancer version described above corresponds to the name of an object in RADOS that holds the actual Lua balancing code. When metadata server nodes start balancing load, they first check the latest version from the metadata server map and compare it to the balancer they have loaded. If the version has changed, they dereference the pointer to the balancer version by reading the corresponding object in RADOS. This is in contrast to the original Mantle implementation, which stored load balancer code on the local file system – a technique which is unreliable and may result in silent corruption. The balancer pulls the Lua code from RADOS synchronously; asynchronous reads are not possible because of the architecture of the metadata server. The synchronous behavior is not the default behavior for RADOS operations, so we achieve this with a timeout: if the asynchronous read does not come back within half the balancing tick interval the operation is canceled and a Connection Timeout error is returned. This design allows Mantle to immediately return an error if anything RADOS-related goes wrong. We use this implementation because we do not want to do a blocking object storage daemon read from inside the global metadata server lock. Doing so would bring down the metadata server cluster if any of the object storage daemons are not responsive. Storing the balancers in RADOS is simplified by the use of an interpreted language for writing balancer code.
If we used a language that needs to be compiled, like the C++ object classes in the object storage daemon, we would need to ensure binary compatibility, which is complicated by different operating systems, distributions, and compilers.

5.1.3 Logging, Debugging, and Warnings

In the original implementation, Mantle would log all errors, warnings, and debug messages to a log stored locally on each metadata server. To get the simplest status messages or to debug problems, the user would have to log into each metadata server individually, look at the logs, and reason about causality and ordering. With Malacology, Mantle re-uses the centralized logging features of the monitoring service. Important errors, warnings, and info messages are collected by the monitoring subsystem and appear in the monitor cluster log, so instead of going to each node, users can watch messages appear at the monitor daemon. Messages are logged sparingly, so as not to overload the monitor with frivolous debugging, but important events, like balancer version changes or failed subsystems, show up in the centralized log.

5.2 ZLog: A Fast Distributed Shared Log

The second service implemented on Malacology is ZLog, a high-performance distributed shared-log that is based on the CORFU protocol [6]. The shared-log is a powerful abstraction used to construct distributed systems, such as metadata management [7] and elastic database systems [8–10]. However, existing implementations that rely on consensus algorithms such as Paxos funnel I/O through a single point, introducing a bottleneck that restricts throughput. In contrast, the CORFU protocol is able to achieve high throughput using a network counter called a sequencer that decouples log position assignment from log I/O.
While a full description of the CORFU system is beyond the scope of this paper, we briefly describe the custom storage device interface, sequencer service, and recovery protocol, and how these services are instantiated in the Malacology framework.

5.2.1 Sequencer

High performance in CORFU is achieved using a sequencer service that assigns log positions to clients by reading from a volatile, in-memory counter, which can run at very high throughput and low latency. Since the sequencer is centralized, ensuring serialization in the common case is trivial. The primary challenge in CORFU is handling the failure of the sequencer in a way that preserves correctness. Failure of the sequencer service in CORFU is handled by a recovery algorithm that recomputes the new sequencer state using a CORFU-specific custom storage interface to discover the tail of the log, while simultaneously invalidating stale client requests using an epoch-based protocol.

Sequencer interface. The sequencer resource supports the ability to read() the current tail value and to get the next() position in the log, which also atomically increments the tail position. We implement the sequencer service in Malacology using the File Type interface. This approach has the added benefit of allowing the metadata service to handle naming, by representing each sequencer instance in the standard POSIX hierarchical namespace. The primary challenge in mapping the sequencer resource to the metadata service is handling serialization correctly to maintain the global ordering provided by the CORFU protocol. Initially we sought to directly model the sequencer service in Ceph as a non-exclusive, non-cacheable resource, forcing clients to perform a round-trip access to the resource at the authoritative metadata server for the sequencer inode.
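The read()/next() contract described above can be sketched as a counter whose increment is serialized. This is a toy model, not the ZLog implementation: an ordinary lock stands in for the exclusivity that Ceph's capability service provides.

```python
import threading

class Sequencer:
    """Sketch of the sequencer resource: read() returns the current tail,
    next() hands out a log position and atomically advances the tail."""
    def __init__(self, start=0):
        self._tail = start
        self._lock = threading.Lock()  # stands in for capability-based exclusivity

    def read(self):
        """Return the current tail without claiming a position."""
        return self._tail

    def next(self):
        """Atomically claim the next log position."""
        with self._lock:
            pos = self._tail
            self._tail += 1
            return pos
```

Because next() is a read-and-increment under mutual exclusion, two clients can never be handed the same log position, which is exactly the global ordering property the metadata service must preserve.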
Interestingly, we found that the capability system in Ceph reduces metadata service load by allowing clients that open a shared file to temporarily obtain an exclusive cached copy of the resource, resulting in a round-robin, best-effort batching behavior. When a single client is accessing the sequencer resource it is able to increment the sequencer locally. Any competing client cannot query the sequencer until the metadata service has granted it access. While unexpected, this discovery allowed us to explore an implementation strategy that we had not previously considered. In particular, for bursty clients, and clients that can tolerate increased latency, this mode of operation may allow a system to achieve much higher throughput than a system with a centralized sequencer service.

We utilize the programmability of the metadata service to define a new policy for handling capabilities that controls the amount of time that clients are able to cache the sequencer resource. This allows an administrator or application to control the trade-off between latency and throughput beyond the standard best-effort policy that is present in Ceph by default. In Section 6 we quantify the trade-offs between throughput and latency for an approach based on a round-robin batching mode, and compare this mode to one in which the metadata server mediates access to the sequencer state when it is being shared among multiple clients. Quantifying these trade-offs should provide administrators with guidelines for setting the tunables for different "caching" modes of the sequencer.

Balancing policies. As opposed to the batching mode for controlling access to the sequencer resource, more predictable latency can be achieved by treating the sequencer inode as a shared non-cacheable resource, forcing clients to make a round-trip to the metadata service. However, the shared nature of the metadata service may prevent the sequencer from achieving maximum throughput.
To address this issue we use the Load Balancing interface to construct a service-specific load balancing policy. As opposed to a balancing policy that strives for uniform load distribution, a ZLog-specific policy may utilize knowledge of inode types to migrate the sequencer service to provisioned hardware during periods of contention or high demand.

5.2.2 Storage Interface

The storage interface is a critical component in the CORFU protocol. Clients independently map log positions that they have obtained from the sequencer service (described above) onto storage devices, while storage devices provide an intelligent write-once, random read interface for accessing log entries. The key to correctness in CORFU lies with the enforcement of up-to-date epoch tags on client requests; requests tagged with out-of-date epoch values are rejected, and clients are expected to request a new tail from the sequencer after refreshing state from an auxiliary service. This mechanism forms the basis for sequencer recovery.

In order to repopulate the sequencer state (i.e., the cached, current tail of the log) during recovery of a sequencer, the maximum position in the log must be obtained. To do this, the storage interface exposes an additional seal method that atomically installs a new epoch value and returns the maximum log position that has been written. Since the sequencer service does not resume until the recovery process has completed, there cannot be a race with clients appending to the log, and the immutability of the log allows reads to proceed without blocking during a sequencer failure.

Recovery of the sequencer process itself may be handled in many ways, such as leader election using an auxiliary service like Paxos. In our implementation, recovery is the same as (and is inherited from) that of the CephFS metadata service.
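The write-once, epoch-tagged device behavior and the seal() recovery step can be pictured with a toy model. This is our illustration, not the actual RADOS object interface; class and method names are ours.

```python
class LogDevice:
    """Toy model of a CORFU-style storage device: entries are write-once,
    requests carry an epoch tag, and seal() installs a new epoch while
    reporting the maximum written position for sequencer recovery."""
    def __init__(self):
        self.epoch = 0
        self.entries = {}  # log position -> data

    def write(self, epoch, pos, data):
        if epoch < self.epoch:
            # Stale requests are rejected; the client must refresh and retry.
            raise ValueError("stale epoch: refresh and retry")
        if pos in self.entries:
            raise ValueError("write-once violation")
        self.entries[pos] = data

    def read(self, pos):
        # Reads never block, even while a sequencer failure is being handled.
        return self.entries.get(pos)

    def seal(self, new_epoch):
        """Atomically install new_epoch and return the max written position."""
        self.epoch = new_epoch
        return max(self.entries, default=-1)
```

In this model, seal() both fences out clients still using the old epoch and gives the recovering sequencer the value it needs to repopulate its tail counter.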
Handling the failure of a client that holds the sequencer state is similar, although a timeout is used to determine when a client should be considered unavailable.

6. Evaluation

Our evaluation demonstrates the feasibility of building new service abstractions atop programmable storage, focusing on the performance of the internal abstractions exposed by Malacology and used to construct the Mantle and ZLog services. We also discuss latent capabilities we discovered in this process that let us navigate different trade-offs within the services themselves. First, we benchmark scenarios with high sequencer contention by examining the interfaces used to map ZLog onto Malacology; specifically, we describe the sequencer implementation and the propagation of object and data interfaces. Next, we benchmark scenarios in which the storage system manages multiple logs by using Mantle to balance sequencers across a cluster.

Since this work focuses on the programmability of Malacology, the goal of this section is to show that the components and subsystems that support the Malacology interfaces provide reasonable relative performance, as well as to give examples of the flexibility that Malacology provides to programmers. This section uses a principled approach for evaluating tunables of the interfaces, and the trade-offs we discuss should be acknowledged when building higher-level services.

6.1 Mapping ZLog onto Malacology

We evaluate Malacology by exploring one possible mapping of the ZLog implementation of CORFU onto Ceph in which we re-use (1) the metadata service to manage naming and synchronization of the sequencer resource by treating the resource as an inode, and (2) the monitoring subsystem to distribute and install application-specific I/O interfaces required by the CORFU protocol.
In Section 6.2 we then demonstrate how the re-use of the inode abstraction for implementing the sequencer resource enables load balancing policies to migrate the sequencer resource in heavy-load situations.

6.1.1 Sequencer Implementation

We evaluate the feasibility of using the metadata service to implement a sequencer resource that is responsible for maintaining a total ordering of the log. Clients contact the sequencer to obtain the tail of the log and then independently initiate I/O, thus we measure both the throughput and latency of obtaining new tail positions, which bounds client append performance. The sequencer is implemented using the File Type interface so that the sequencer state (a 64-bit integer) is embedded in the inode of a file. A total ordering of the log is imposed by the re-use of the capability service, which can be used to grant clients exclusive access to inode state. The metadata service is responsible for maintaining exclusivity and granting access.

Figure 5 (a) shows the behavior of the system in which a best-effort policy is used. The two colors represent points in time at which the clients were able to access the resource. The best-effort policy shows a high degree of interleaving between clients, but the system spends a large portion of time re-distributing the capability, reducing overall throughput. In order to control the performance of the system we implement a policy that (1) restricts the length of time that a client may maintain exclusive access and (2) limits the number of log positions that a client may generate without yielding to other clients waiting for access. The behavior of these two modes is illustrated in Figures 5 (b) and (c), respectively.

[Figure caption: Each dot is an individual request, spread randomly along the y axis. The default behavior is unpredictable, "delay" lets clients hold the lease longer, and "quota" gives clients the lease for a number of operations.]

[Figure caption: Sequencer throughput by re-using various services. The highest performance is achieved using a single client with exclusive, cacheable privilege. Round-robin sharing of the sequencer resource is affected by the amount of time the resource is held, with best-effort performing the worst.]

Figure 6 demonstrates a configurable trade-off between throughput and latency. In the experiment two clients are run, each with a fixed 0.25 second maximum reservation on the capability, and we vary the size of the log position quota, running each configuration for two minutes. The total operations per second is the combined throughput of the two clients, and the average latency is the number of microseconds required to obtain a new log position. With a small quota more time is spent exchanging exclusive access, while a large quota allows clients to experience much lower latency because they have isolated access for a longer period of time.

To get a better picture of latency, Figure 7 shows the CDF of latency for each client in all experiment configurations. At the 99th percentile clients accessed the sequencer in less than a millisecond. The CDF is cropped at the 99.999th percentile due to large outliers that we believe occur in instances in which the metadata server is performing I/O while it is in the process of re-distributing the capability to another client. Malacology exposes the internal capability management service and allows users to navigate latency and throughput trade-offs. Other approaches to designing the sequencer service also exist, such as using a centralized service in which each access is a network round-trip. In contrast to the mechanism we explored, which is appropriate for clients with bursty workloads, it may be easier to provide predictable performance using a centralized service, and we will explore in the future how this can be achieved using the capability system.
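The quota trade-off can be illustrated with a toy cost model; the model and all numbers below are ours, purely illustrative. Each position handed out while holding the capability costs one time unit, and each capability handoff costs many more, so larger quotas amortize the handoff cost.

```python
def sequencer_throughput(total_ops, quota, handoff_cost):
    """Toy model: clients take turns holding the capability. A holder may
    generate at most `quota` positions before yielding; every handoff
    costs `handoff_cost` time units and each local increment costs one."""
    time_units = 0
    ops_done = 0
    while ops_done < total_ops:
        burst = min(quota, total_ops - ops_done)
        ops_done += burst
        time_units += burst + handoff_cost  # burst of local ops, then yield
    return total_ops / time_units           # ops per time unit

small_quota = sequencer_throughput(1000, quota=10, handoff_cost=50)
large_quota = sequencer_throughput(1000, quota=100, handoff_cost=50)
```

In this model a larger quota yields higher aggregate throughput but makes the waiting client sit out longer bursts, mirroring the throughput-versus-latency trade-off measured above.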
6.1.2 Interface Propagation

Domain-specific data interfaces (Section 2) allow co-design between applications and the storage system. Malacology supports custom object interfaces in RADOS that require interface implementations to be installed on the storage devices in the system, supporting the evolution of interfaces through automatic system-wide versioning and installation through the Service Metadata interface (Section 4.1). We evaluate the performance of installing a new interface version in the cluster, which is an important metric for applications that frequently evolve interfaces.

We demonstrate the feasibility of utilizing the Ceph monitoring subsystem by evaluating the performance of installing and distributing interface updates. Figure 8 shows the CDF of the latency of interface updates. The interfaces are Lua scripts embedded in the cluster map and distributed using a peer-to-peer gossip protocol. The latency is defined as the elapsed time from the Paxos proposal for an interface update until each object storage daemon makes the update live (the cost of the Paxos proposal is configurable and is discussed below). The latency measurements were taken on the nodes running object storage daemons, and thus exclude the client round-trip cost. In each of the experiments 1000 interface updates were observed.

Figure 8 shows the lower-bound cost for updates in a large cluster. In the experiment labeled "120 OSD (RAM)" a cluster of 120 object storage daemons (OSDs) using an in-memory data store was deployed, showing a latency of less than 54 ms with a probability of 90% and a worst-case latency of 194 ms. These costs demonstrate the penalty of distributing the interface in a large cluster. In practice the costs include, in addition to cluster-wide propagation of interface updates, the network round-trip to the interface management service, the Paxos commit protocol itself, and other factors such as system load.
By default Paxos proposals occur periodically with a 1 second interval in order to accumulate updates. In a minimum, realistic quorum of 3 monitors using hard drive-based storage, we were able to decrease this interval to an average of 222 ms.

Figure 8: [source] Cluster-wide interface update latency, excluding the Paxos proposal cost for committing the Service Metadata interface.

Figure 9: [source] CephFS/Mantle load balancing have better throughput than co-locating all sequencers on the same server. Sections 6.2.1 and 6.2.2 quantify this improvement; Section 6.2.3 examines the migration at 0-60 seconds.

6.2 Load Balancing ZLog Sequencers with Mantle

In practice, a storage system implementing CORFU will support a multiplicity of independent totally-ordered logs for each application. For this scenario, co-locating sequencers on the same physical node is not ideal, but building a load balancer that can migrate the shared resource (e.g., the resource that mediates access to the tail of the log) is a time-consuming, non-trivial task. It requires building subsystems for migrating resources, monitoring the workloads, collecting metrics that describe the utilization on the physical nodes, partitioning resources, maintaining cache coherence, and managing multiple sequencers. The following experiments demonstrate the feasibility of using the mechanisms of the Malacology Load Balancing interface to inherit these features and to alleviate load from overloaded servers.

The experiments are run on a cluster with 10 nodes to store objects, one node to monitor the cluster, and 3 nodes that can accommodate sequencers. Instead of measuring contention at the clients as in Section 6.1.1, these experiments measure contention at the sequencers by forcing clients to make round-trips for every request. We implement this using the Shared Resource interface that forces round-trips.
Because the sequencer's only function is to hand out positions for the tail of the log, the workload is read-heavy. First, we show how the ZLog service can orchestrate multiple sequencers using the Malacology Load Balancing interface. Figure 9 shows the throughput over time of different load balancers as they migrate 3 sequencers (with 4 clients each) around the cluster; "No Balancing" keeps all sequencers on one server, "CephFS" migrates sequencers using the hard-coded CephFS load balancers, and "Mantle" uses a custom load balancer we wrote specifically for sequencers. The increased throughput for the CephFS and Mantle curves between 0 and 60 seconds is a result of migrating the sequencer(s) off overloaded servers.

In addition to showing that migrating sequencers improves performance, Figure 9 also demonstrates features that we will explore in the rest of this section. Sections 6.2.1 and 6.2.2 quantify the differences in performance when the cluster stabilizes at time 100 seconds, and Section 6.2.3 examines the slope and start time of the re-balancing phase between 0 and 60 seconds by comparing the aggressiveness of the balancers.

6.2.1 Feature: Balancing Modes

Next, we quantify the performance benefits shown in Figure 9. To understand why the load balancers perform differently, we need to explain the different balancing modes that the load balancer service uses and how they stress, in different ways, the subsystems that receive and forward client requests. In Figure 9, the CephFS curve shows the performance of the balancing mode that CephFS falls into most of the time. CephFS currently has 3 modes for balancing load: CPU mode, workload mode, and hybrid mode. All three have the same structure for making migration decisions but vary based on the metric used to calculate load. For this sequencer workload the 3 different modes all have the same performance, shown in Figure 10 (a), because the load balancer falls into the same mode a majority of the time.
The high variation in performance for the CephFS CPU Mode bar reflects the uncertainty of using something as dynamic and unpredictable as CPU utilization to make migration decisions. In addition to the suboptimal performance and unpredictability, another problem is that the CephFS balancers all behave the same way for this workload.

Mantle gives the administrator more control over balancing policies; for the Mantle bar in Figure 10 (a) we use the Load Balancing interface to program logic for balancing read-heavy workloads, resulting in better throughput and stability. When we did this we also identified two balancing modes relevant for making migration decisions for sequencers. Using Mantle, the administrator can put the load balancer into "proxy mode" or "client mode". In proxy mode, one server receives all requests and farms the requests off to slave servers; the slave servers do the actual tail-finding operation. In client mode, clients interact directly with the server that has their sequencer.

These modes are illustrated in Figure 11. "No Balancing" is when all sequencers are co-located on one physical server – performance for that mode is shown by the "No Balancing" curve in Figure 9. In "Proxy Mode", clients continue sending requests to server A even though some of the sequencers have been migrated to another server. Server A redirects client requests for sequencer 2 to server B. "Proxy Mode (Half)" is shown in Figure 9; in this scenario, half of the sequencers have migrated off the first server. Alternatively, "Proxy Mode (Full)", which is not pictured, is when all the sequencers migrate off the first server. "Client Mode", shown on the far right of Figure 11, shows how clients for sequencer 2 contact server B without a redirect from server A.

Figure 12 shows the throughput over time of the two different modes for an environment with only 2 sequencers (again 4 clients each) and 2 servers.
The curves for both sequencers in Figure 12(a) start at less than 1000 ops/second, and at time 60 seconds Mantle migrates Sequencer 1 to the slave server. Performance of Sequencer 2 decreases because it stayed on the proxy, which now processes requests for Sequencer 2 and forwards requests for Sequencer 1. The performance of Sequencer 1 improves dramatically because distributing the sequencers in this way separates (1) the handling of client requests and (2) finding the tail of the log and responding to clients. Doing both steps is too heavyweight for one server, and sequencers on slave nodes can go faster if work is split up; this phenomenon is not uncommon and has been observed in chain replication [44]. Cluster throughput improves at the cost of decreased throughput for Sequencer 2.

Figure 12(b) is set to client mode manually (no balancing phase) and shows that the cluster throughput is worse than the cluster throughput of proxy mode. That graph also shows that Sequencer 2 has less throughput than Sequencer 1. In this case, the scatter-gather process used for cache coherence in the metadata protocols causes strain on the server housing Sequencer 2, resulting in this uneven performance.

6.2.2 Feature: Migration Units

Another factor that affects performance in this environment is how much load is on each server; these experiments quantify that effect by programming the Load Balancing interface to control the amount of load to migrate. We call this metric a "migration unit". Expressing this heuristic is not easily achievable with outward-facing tunable parameters (i.e., system knobs), but with Mantle's programmable interface we can force the load balancer to change its migration units.
To force the balancer into the Proxy Mode (Half) scenario in Figure 11, which uses migration units equal to half the load on the current server, we can use:

    targets[whoami + 1] = mds[whoami]["load"] / 2

This code snippet uses globally defined variables and tables from the Mantle API [37] to send half of the load on the current server (whoami) to the next ranked server (whoami + 1); the targets array is a globally defined table that the balancer uses to do the migrations. Alternatively, to migrate all the load at each time step, we can remove the division by 2.

Figure 10 (b) shows the performance of the modes using different migration units. Recall that this setup only has 2 sequencers and 2 servers, so performance may be different at scale. Even so, it is clear that client mode does not perform as well for read-heavy workloads. We even see a throughput improvement when migrating all load off the first server, leaving the first server to do administrative tasks (this is common in the metadata cluster because the first server does a lot of the cache coherence work) while the second server does all the processing. Proxy mode does the best in both cases and shows large performance gains when completely decoupling client request handling and operation processing in Proxy Mode (Full). The parameter that controls the migration units helps the administrator control sequencer co-location or distribution across the cluster. This trade-off was explored extensively in the Mantle paper, but the experiments we present here are indicative of an even richer set of states to explore.

6.2.3 Feature: Backoff

Tuning the aggressiveness of the load balancer's decision making is also a trade-off that administrators can control and explore.
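One way to picture such tuning is a hypothetical policy that migrates only when the receiving server is underloaded and that backs off for a fixed number of balancing iterations after each migration. The class and names below are illustrative, not the real Mantle callbacks.

```python
class MigrationPolicy:
    """Toy sketch of a conservative balancer: migrate only when the receiver
    is underloaded, then back off for `cooldown_iterations` ticks (the
    countdown stands in for state a balancer would save between calls)."""
    def __init__(self, receiver_threshold, cooldown_iterations):
        self.threshold = receiver_threshold
        self.cooldown = cooldown_iterations
        self.remaining = 0   # countdown after the previous migration

    def when(self, receiver_load):
        """Called once per balancing tick; returns True to migrate now."""
        if self.remaining > 0:
            self.remaining -= 1        # still backing off from last migration
            return False
        if receiver_load < self.threshold:
            self.remaining = self.cooldown
            return True
        return False
```

A larger cooldown or a lower receiver threshold yields a more conservative balancer that waits out transient load before moving anything.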
The balancing phase from 0 to 60 seconds in Figure 9 shows different degrees of aggressiveness in making migration decisions; CephFS makes a decision 10 seconds into the run and throughput jumps to 2500 ops/second, while Mantle takes more time to stabilize. This conservative behavior is controlled by programming the balancer to (1) use different conditions for when to migrate and (2) use a threshold for sustained overload.

We control the conditions for when to migrate using when(), a callback in the Mantle API. For the Mantle curve in Figure 9 we program when() to wait for load on the receiving server to fall below a threshold. This makes the balancer more conservative because it takes 60 seconds for cache coherence messages to settle. The Mantle curve in Figure 9 also takes longer to reach peak throughput because we want the policy to wait to see how migrations affect the system before proceeding; the balancer does a migration right before 50 seconds, realizes that there is a third underloaded server, and does another migration.

The other way to change the aggressiveness of the decision making is to program into the balancer a threshold for sustained overload. This forces the balancer to wait a certain number of iterations after a migration before proceeding. In Mantle, the policy would use the save state function to do a countdown after a migration. Behavior graphs and performance numbers for this backoff feature are omitted for space considerations, but our experiments confirm that the more conservative the approach, the lower the overall throughput.

Malacology pulls the load balancing service out of the storage system to balance sequencers across a cluster. This latent capability also gives future programmers the ability to explore different load balancing trade-offs, including: load balancing modes to control forwarding vs. client redirection, load migration units to control sequencer distribution vs. co-location, and backoffs to control conservative vs.
aggressive decision making.

7. Future Work

Malacology is a first step towards showing how general-purpose storage systems can be adapted to target special-purpose applications. By encapsulating storage system functionality as reusable building blocks, we enable application developers to leverage storage capabilities based on interfaces that are proven and understandable. However, creation and composition of interfaces is complex; constructs must be combined safely in order to provide correctness, performance and security. We will study additional Malacology-based services in order to learn techniques that support safe composition. Some higher-level services that we plan to build using the interfaces in Table 2 are: an elastic cloud database, a data processing engine, and a data layout manager. Approaches proposed so far use the Data I/O interface to push down predicates and computation, the File Type interface to maintain access paths and metadata efficiently, and the Durability interface to manage ingestion and movement. Using the programmable storage approach helps us build higher-level services that work well with the storage system, not in spite of it.

Our experience with ZLog and Mantle demonstrates that the labor of wrapping existing services in reusable interfaces is justified by the power and flexibility that this encapsulation affords to programmers. In exchange for this flexibility, however, programmers may forfeit the protection from change afforded by narrow storage interfaces such as the POSIX API. To implement applications on programmable storage systems such as Malacology, programmers must find solutions by navigating a complex design space, simultaneously addressing functional correctness, performance and fault tolerance. Worse still, their solutions may be sensitive to changes in the underlying environment, such as hardware upgrades, software version changes and evolving workloads.
For example, a major version change in Ceph required us to rewrite significant parts of ZLog to maintain acceptable performance. Each such evolution costs developer time and risks introducing bugs. We are actively exploring the use of high-level declarative languages based on Datalog [2] to program data access and storage APIs. Using this approach, a systems programmer can specify the functional behavior in a relational (or algebraic) language, allowing an optimizer to search through the space of functionally equivalent physical implementations and select a good execution plan, re-optimizing when storage characteristics or statistics change. Much like query planning and optimization in database systems [24], this approach will separate the concerns of correctness and performance, protecting applications (which usually evolve slowly) against changes in the more dynamic storage system.

8. Related Work

Programmability of operating systems and networking resources, including distributed storage systems, is not new, but we are not aware of work that makes generalization of existing services into programmable resources a key principle in storage systems design. Programmable storage systems can be viewed as an infrastructure for creating abstractions to better separate policies from mechanisms. This idea is not new. Software-defined networks (SDNs) create such an abstraction by separating the control plane from the data plane (see for example [27]). This notion of control/data separation was also applied in software-defined storage (SDS) [41, 43]. Similarly, IOStack [19] provides policy-based provisioning and filtering in OpenStack Swift. According to a SNIA white paper [14], the primary goal of SDS is to control and facilitate flexible and dynamic provisioning of storage resources of different kinds, including flash memory and disk drives, to create a virtual mapping between common storage abstractions (e.g.
files, objects, and blocks) and storage devices, taking into account data service objectives in terms of protection, availability, performance, and security. A programmable storage system exposes internal abstractions so that end users (not necessarily operators) can create new services on top of the storage stack. Thus, our notion of programmable storage differs from "software-defined storage" (SDS) in terms of goals and scope, although definitions of SDS are still in flux.

Another view of programmable storage systems is one of tailoring system resources to applications [5]. Related efforts include the Exokernel [15], SPIN [11] and Vino [36] projects; the latter two addressed the ability to inject code into the kernel to specialize resource management. Another approach is to pass hints between the different layers of the I/O stack to bridge the semantic gap between applications and storage [5, 33, 39]. Malacology uses the same Active and Typed Storage module presented in DataMods [47]; Asynchronous Service and File Manifolds can be implemented with small changes to the Malacology framework, namely asynchronous object calls and Lua stubs in the inode, respectively.

9. Conclusion

Programmable storage is a viable method for eliminating duplication of the complex, error-prone software used as workarounds for storage system deficiencies. We propose that systems expose their services in a safe way, allowing application developers to customize system behavior to meet their needs while not sacrificing correctness. To illustrate the benefits of this approach we presented Malacology², a programmable storage system that facilitates the construction of new services by re-purposing existing subsystem abstractions of the storage stack.

Acknowledgments

We thank the EuroSys reviewers for their hard work, attentiveness, and genuinely helpful suggestions. We especially thank Mahesh Balakrishnan for shepherding the paper.
This work was partially funded by the Center for Research in Open Source Software³, the DOE Award DE-SC0016074, and the NSF Award 1450488.

Note: this paper follows The Popper Convention⁴ [28]. All the experiments presented here are available in the repository associated with this article⁵. For every figure, a [source] link points to a Jupyter notebook that shows the analysis from which the graph was obtained; its parent folder contains all the associated artifacts.

² http://programmability.us
³ http://cross.ucsc.edu
⁴ http://falsifiable.us
⁵ https://github.com/michaelsevilla/malacology/popper/tree/v2.1

References

[21] J. Gray. Tape is Dead, Disk is Tape, Flash is Disk, RAM Locality is King. CIDR 2007 - Gong Show Presentation, January 2007.
null], [47609, 49648, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2492, true], [2492, 5051, null], [5051, 7931, null], [7931, 10677, null], [10677, 13566, null], [13566, 16423, null], [16423, 20324, null], [20324, 23348, null], [23348, 26976, null], [26976, 28986, null], [28986, 31118, null], [31118, 34461, null], [34461, 37176, null], [37176, 40618, null], [40618, 44106, null], [44106, 47609, null], [47609, 49648, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 49648, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 49648, null]], "pdf_page_numbers": [[0, 2492, 1], [2492, 5051, 2], [5051, 7931, 3], [7931, 10677, 4], [10677, 13566, 5], [13566, 16423, 6], [16423, 20324, 7], [20324, 23348, 8], [23348, 26976, 9], [26976, 28986, 10], [28986, 31118, 11], [31118, 34461, 12], [34461, 37176, 13], [37176, 40618, 14], [40618, 44106, 15], [44106, 47609, 16], [47609, 49648, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 49648, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24